Hyperparameter Optimization for TensorFlow, Keras and PyTorch

Overview


Talos

Hyperparameter Optimization for Keras

Talos radically changes the ordinary Keras workflow by fully automating hyperparameter tuning and model evaluation. It exposes Keras functionality entirely, with no new syntax or templates to learn.

TL;DR

Talos radically transforms ordinary Keras workflows without taking away any of Keras's functionality.

  • works with ANY Keras model
  • takes minutes to implement
  • no new syntax to learn
  • adds zero new overhead to your workflow

Talos is made for data scientists and data engineers who want to remain in complete control of their Keras models, but are tired of mindless parameter hopping and of optimization solutions that add complexity instead of reducing it. Within minutes, and without learning any new syntax, Talos lets you configure, perform, and evaluate hyperparameter optimization experiments that yield state-of-the-art results across a wide range of prediction tasks. Talos provides the simplest yet most powerful method available for hyperparameter optimization with Keras.


🔧 Key Features

Based on what no doubt constitutes a "biased" review (being our own) of more than 30 hyperparameter tuning and optimization solutions, Talos comes out on top in terms of intuitive, easy-to-learn, and highly permissive access to critical hyperparameter optimization capabilities. Key features include:

  • Single-line optimize-to-predict pipeline: talos.Scan(x, y, model, params).predict(x_test, y_test) (see the sketch below)
  • Automated hyperparameter optimization
  • Model generalization evaluator
  • Experiment analytics
  • Pseudo, quasi, and quantum random search options
  • Grid search
  • Probabilistic optimizers
  • Single-file custom optimization strategies
  • Dynamically change optimization strategy during the experiment
  • Support for man-machine cooperative optimization strategies
  • Live training monitor

Talos works on Linux, macOS, and Windows, and can be run on CPU, GPU, and multi-GPU systems.
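
For orientation, here is a minimal sketch of a complete Talos experiment. The parameter values, the model body, and the x and y arrays are illustrative placeholders, not taken from the docs:

import talos
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# placeholder parameter space; any values your model reads from params will work
p = {'first_neuron': [12, 24, 48],
     'activation': ['relu', 'elu'],
     'batch_size': [16, 32]}

# an ordinary Keras model, wrapped in a function that reads from params
def minimal_model(x_train, y_train, x_val, y_val, params):
    model = Sequential()
    model.add(Dense(params['first_neuron'],
                    input_dim=x_train.shape[1],
                    activation=params['activation']))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['acc'])
    out = model.fit(x_train, y_train,
                    batch_size=params['batch_size'],
                    epochs=10,
                    validation_data=(x_val, y_val),
                    verbose=0)
    return out, model

scan_object = talos.Scan(x, y, params=p, model=minimal_model)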


▶️ Examples

Get the below code here. More examples further below.

The Simple example below is more than enough to start using Talos with any Keras model. The Field Report has over 2,600 claps on Medium because it's a more entertaining read.

Simple [1-2 mins]

Concise [~5 mins]

Comprehensive [~10 mins]

Field Report [~15 mins]

For more information on how Talos can help with your Keras workflow, visit the User Manual.

You may also want to check out a visualization of the Talos Hyperparameter Tuning workflow.


💾 Install

Stable version:

pip install talos

Daily development version:

pip install git+https://github.com/autonomio/talos


💬 How to get Support

I want to... Go to...
...troubleshoot Docs · Wiki · GitHub Issue Tracker
...report a bug GitHub Issue Tracker
...suggest a new feature GitHub Issue Tracker
...get support Stack Overflow · Spectrum Chat
...have a discussion Spectrum Chat

📢 Citations

If you use Talos for published work, please cite:

Autonomio Talos [Computer software]. (2019). Retrieved from http://github.com/autonomio/talos.


📃 License

MIT License

Comments
  • allow use of generators with fit_generator()

    allow use of generators with fit_generator()

    It seems that the only thing that needs to change is the way validation split is now handled internally. Two options:

    • add a parameter "use_generator=True"
    • detect the type of the validation_data parameter
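
    A minimal sketch of the second option, with a hypothetical helper name (_uses_generator is illustrative, not existing Talos code):

    import types
    from tensorflow.keras.utils import Sequence

    def _uses_generator(validation_data):
        # hypothetical helper: treat plain generators and keras Sequence
        # objects as generator-style input
        return isinstance(validation_data, (types.GeneratorType, Sequence))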

    Then the user will have to pass data in to Scan() slightly differently as well. So this needs to be thought about a little.

    Also, it seems that Keras fit_generator has a memory-leak-like issue which has been reported in many instances, so this will have to be looked into as well.

    topic: help wanted 
    opened by mikkokotila 50
  • ResourceExhaustedError after several iterations in a grid search

    ResourceExhaustedError after several iterations in a grid search

    First off, make sure to check your support options.

    The preferred way to resolve usage related matters is through the docs which are maintained up-to-date with the latest version of Talos.

    If you do end up asking for support in a new issue, make sure to follow the below steps carefully.

    1) Confirm the below

    • [x] I have looked for an answer in the Docs
    • [x] My Python version is 3.5 or higher
    • [x] I have searched through the Issues for a duplicate
    • [x] I've tested that my Keras model works as a stand-alone

    2) Include the output of:

    talos.__version__ == 0.6.7

    3) Explain clearly what you are trying to achieve

    I am running a grid search that gives 36 rounds. After about 4 or 5 rounds, during a model.fit I suddenly get hit by a ResourceExhaustedError. I think this is very odd given that I am able to complete at least 3 rounds of fitting on the GPU (with a model and batch size that takes up pretty much all the gpu memory), so it seems that there is a small but significant memory leak somewhere. Any ideas what it could be?
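
    A common community workaround (not a confirmed fix for this report) is to clear the backend session at the top of the model-building function, so graph state from earlier rounds is released:

    import gc
    import tensorflow as tf

    def build_model(x_train, y_train, x_val, y_val, params):
        # release graphs left over from earlier rounds before building a new model
        tf.keras.backend.clear_session()
        gc.collect()
        ...  # build, compile, and fit the model as usual, then return (history, model)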

    priority: MEDIUM topic: tensorflow value: ⭐⭐⭐ 
    opened by bjtho08 33
  • How do I pass in a list of inputs?

    How do I pass in a list of inputs?

    In most Talos code examples, the x input seems to be a 2D numpy array. But some of my Keras models require a list of 2D numpy arrays because they are not Sequential. Example list: [numpyarray1, numpyarray2, numpyarray3].

    It seems that when I try to pass this list to the ta.Scan function, things break.

    How do I pass in a list of inputs?

    Also, I'm not sure whether Talos supports pandas DataFrames. I think it does not, so please add support for them.

    opened by off99555 33
  • Reporting Data has incorrect column associations to frame

    Reporting Data has incorrect column associations to frame

    • [x] I'm up-to-date with the latest release:

      pip install -U talos
      
    • [x] I've confirmed that my Keras model works outside of Talos.


    I noticed that after scanning a Parameter dictionary, the inner data object of the Reporter has the column/row associations incorrectly ordered in Python 2.7.

    For example, something like:

    p = {
        'compile_loss': ['mean_squared_error'],
        'compile_optimizer': ['sgd'],
        'hidden_units': [64, 128, 512],
        'inner_activation': ['relu'],
        'output_activation': ['relu'],
        'recurrent_activation': ['relu'],
        'lstm_layers': [0],
        'gru_layers': [0],
        'dropout_ratio': [.2],
        'activate_regularizers': [1, 0],
        'batch_window_size': [5, 50],
        'epochs': [100]
    }

    scan = ta.Scan(train_inputs, train_outputs, p, self._create_keras_model)
    reporting = ta.Reporting(scan)
    print(reporting.data.columns)


    Prints out:

       round_epochs                  acc         loss              val_acc  \
    1           10  0.20000000298023224  315008640.0  0.20000000298023224   
    2           10  0.30000001192092896  170956672.0  0.20000000298023224   
    
          val_loss    lr recurrent_activation inner_activation  \
    1  308987328.0  0.01                  0.2             relu   
    2  169688720.0  0.01                  0.2             relu   
    
      activate_regularizers epochs lstm_layers dropout_ratio hidden_units  \
    1    mean_squared_error      0        relu            50          sgd   
    2    mean_squared_error      0        relu            50          sgd   
    
      compile_loss gru_layers batch_window_size compile_optimizer  \
    1           10          0               128              relu   
    2           10          0               512              relu   
    
      output_activation  
    1                 0  
    
    

    You can see in the output above, the scan values appear to be one column index off, which leads to incorrect reporting and difficulty reconstructing the best parameter associations.

    This originally popped up when I realized that best_params is an array, leaving no clear way of reconstructing the original parameter dictionary associations for storing the params for replay later.

    The ask is to simply offer a way of extracting the best_params in a format that allows restoration of the values to the original parameter keys.

    BTW, thank you for this module.

    investigation 
    opened by toddpi314 28
  • Modify ParamGrid to compute only the part of the grid in the selected downsample

    Modify ParamGrid to compute only the part of the grid in the selected downsample

    There are some outstanding issues regarding shuffle and stratified that need to be tested; otherwise it seems to work. More testing is of course prudent.

    opened by JohanMollevik 27
  • re: Memory leak problem (tensorflow backend)

    re: Memory leak problem (tensorflow backend)

    Hi,

    Thanks for your quick answer to issue #342. Unfortunately, the option "clear_tf_session = True" doesn't help. I had already tried it without success. I also tried to put gc.collect() at the start of the function where the model is built.

    Finally, I played with some toy example (dataset with 6000 training examples), and I noticed that after the end of the iterations, the memory used by python gets very large in comparison with the memory usage at the start even if I remove all the variables in the workspace and do gc.collect().

    I've attached my code for reference. It is a simple problem where I try to predict the verb tense in a sentence (7 possible tenses) from the surrounding words (limited to the 10000 most frequent words).

    Cheers,

    FNN_tense1Verb_paramsearch.zip

    topic: performance 
    opened by Adnane017 26
  • How to use f1 measure for "best" model?

    How to use f1 measure for "best" model?

    When I import from talos.metrics.keras_metrics import fbeta_score and compile the model with this metric, then run Talos with the parameter reduction_metric="fbeta_score", the output csv seems to list the val_acc of the best epoch for val_acc, but only the first epoch's value for fbeta_score. This seems like something is going wrong; if anything, it should be producing the corresponding fbeta_score for that epoch, I would have thought.

    I am not interested in accuracy due to class imbalance in my system, and the accuracy saturates after a few epochs, so I need Talos to store, for each parameter combination, either:

    a) the result of the last epoch
    b) ideally, the result with the best fbeta_score

    Given that fbeta_score has been implemented, I assume this must be possible but I don't see how.

    I am using the latest dev branch v0.2 (as I have augmented data, I needed the functionality to supply x_val and y_val as parameters). In order to run this code without bugs, I needed to change talos/metrics/score_model.py line 17 from y_pred = self.keras_model.predict_classes(self.x_val) to y_pred = self.keras_model.predict(self.x_val).

    Which might be related to my problem.
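
    For reference, the wiring described above would look roughly like this; x, y, and p are placeholders, and the sketch is not verified against the v0.2 dev branch:

    import talos
    from talos.metrics.keras_metrics import fbeta_score
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    def build_model(x_train, y_train, x_val, y_val, params):
        model = Sequential([Dense(8, activation='relu'),
                            Dense(1, activation='sigmoid')])
        # compiling fbeta_score into the model makes Keras log it every epoch
        model.compile(optimizer='adam', loss='binary_crossentropy',
                      metrics=['acc', fbeta_score])
        out = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                        epochs=10, verbose=0)
        return out, model

    # name the same metric when starting the scan
    scan = talos.Scan(x, y, params=p, model=build_model,
                      reduction_metric='fbeta_score')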

    priority: HIGH investigation topic: documentation 
    opened by bml1g12 26
  • Support working with huge parameter spaces

    Support working with huge parameter spaces

    I am experimenting with setting up Talos to explore the hyperparameter space of my problem. Naively, I thought that I could just add more or less continuous ranges of my hyperparameters and let Talos's random sampling and round_limit sort out the huge parameter space, 10^16 permutations by my estimate.

    Doing this I wrote this parameter list

        params = {
                'epoch': [2],
                'batch_size': range(1,128,1),
                'activation': [relu], # TODO use for all layers or define different
                # convolution layers in the beginning
                'conv_hidden_layers': range(1,4,1),
                'conv_depth_shape': ['funnel','long_funnel','rhombus','brick','diamond','hexagon','triangle','stairs'],
                'conv_size_shape': ['funnel','long_funnel','rhombus','brick','diamond','hexagon','triangle','stairs'],
                'conv_depth_first_neuron': range(10,100,1),
                'conv_depth_last_neuron': range(5,100,1),
                'conv_size_first_neuron': range(3,15,1),
                'conv_size_last_neuron': range(3,15,1),
                # fully connected layers at the end
                'first_neuron': range(2,128,1),
                'last_neuron': range(2,128,1),
                'shapes': ['funnel','long_funnel','rhombus','brick','diamond','hexagon','triangle','stairs'],
                'hidden_layers': range(0,5,1),
        }
    

    which did not work that well. Looking at the Talos source, it seems that this line in talos/parameters/ParamGrid.py is where it fails:

    _param_grid_out = array(list(product(*ls)), dtype='object')
    

    When this tries to build the permutations explicitly in the list function, it runs out of memory and fails.

    Can we have a feature where we do not build up the parameter space in memory?

    (I will use some workaround for now, so consider this a feature request)
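
    As an illustration of the request, a lazy sampler over the space (a sketch, not the Talos API) avoids building the full product up front:

    import random

    def iter_random_params(space, rounds):
        # draw `rounds` random combinations without materializing
        # product(*space.values()) in memory
        keys = list(space)
        for _ in range(rounds):
            yield {k: random.choice(list(space[k])) for k in keys}

    Each draw costs only O(number of keys) memory, so random search over a 10^16-permutation space becomes feasible; grid search would still need a different, index-based strategy.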

    investigation 
    opened by JohanMollevik 25
  • Can't reproduce results after restoring a model

    Can't reproduce results after restoring a model

    After a scan I save the best models for the metrics 'val_acc', 'val_loss', etc. When I try to restore a model, pick the best for the 'val_acc' metric, and test that model on the training and validation data, I get different results than the report. Why?

    ta.Deploy(scan, exp_acc_filename, metric='val_acc')
    r_acc4 = ta.Restore(path)
    model4 = r_acc4.model
    y_pred_train = model4.predict_classes(X_train)
    accuracy_score(y_train, y_pred_train)  # output 90% but should be 92%
    
    question topic: keras 
    opened by davide1993 25
  • Add ability to filter out unwanted permutations

    Add ability to filter out unwanted permutations

    I propose to fix https://github.com/autonomio/talos/issues/223 by this pull request

    The code is used like this:

    talos.Scan(
        ...
        ,premutation_filter=lambda p: p['hidden_layers']<3 or p['first_neuron']<10)
    

    I have tested locally but have not added any tests.

    opened by JohanMollevik 22
  • Getting some kind of AttributeError

    Getting some kind of AttributeError

    Traceback (most recent call last):
      File "talosHyper.py", line 200, in <module>
        experiment_no='1')
      File "/usr/local/lib/python3.6/dist-packages/talos/scan/Scan.py", line 166, in __init__
        self._null = self.runtime()
      File "/usr/local/lib/python3.6/dist-packages/talos/scan/Scan.py", line 170, in runtime
        self = scan_prepare(self)
      File "/usr/local/lib/python3.6/dist-packages/talos/scan/scan_prepare.py", line 62, in scan_prepare
        self.last_neuron = last_neuron(self)
      File "/usr/local/lib/python3.6/dist-packages/talos/utils/last_neuron.py", line 3, in last_neuron
        labels = list(set(self.y.flatten('F')))
      File "/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py", line 4376, in __getattr__
        return object.__getattribute__(self, name)
    AttributeError: 'Series' object has no attribute 'flatten'
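
    The last frame suggests y was passed as a pandas Series, which has no flatten() method; converting to NumPy arrays before calling Scan() is a likely workaround, though not a confirmed fix (p and build_model are placeholders):

    # pass NumPy arrays instead of pandas objects
    scan = talos.Scan(x.values, y.values, params=p, model=build_model)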

    investigation 
    opened by Liquidten 20
  • Advanced hyperparameter configuration

    Advanced hyperparameter configuration

    1) I recommend:

    To talos.Scan(), add a parameter: "hyperparameter_config": for example, a dictionary in the format:

    KEY: 'name of hyperparameter as listed in talos.Scan(params)' ;

    VALUE: a list of dictionaries, one dict for each config option for the hyperparameter named in the key, to add additional options like what np.random.choice offers (e.g., a size argument allowing an array of selections from a given param, and an option for selection with or without replacement).

    Example:

    params = {
        'l4_upstream_conn_index': np.arange(1, 3).tolist(),
        'l5_upstream_conn_index': np.arange(1, 4).tolist(),
        'l6_upstream_conn_index': np.arange(1, 5).tolist(),
        'l7_upstream_conn_index': np.arange(1, 6).tolist()}

    param_options = {

        'l4_upstream_conn_index': [
            {'size': 3},
            {'with_replacement': 1}],  # select 3 elements with replacement from l4_upstream_conn_index

        'l5_upstream_conn_index': [
            {'size': 4},
            {'with_replacement': 1}],  # select 4 elements with replacement from l5_upstream_conn_index
        # ...
    }

    def make_skip_connection_model(params):

        layers = [None] * 8

        layers[0] = Input((5,))

        layers[1] = Dense(7)(layers[0])
        layers[2] = Dense(7)(layers[1])
        layers[3] = Dense(7)(layers[2])

        # To make the skip connections to multiple predecessor layers, this
        # needs to make multiple selections from each parameter...
        for i in np.arange(4, 8):
            layers[i] = Dense(7)(
                Concatenate(axis=1)(
                    [layers[c] for c in params[f'l{i}_upstream_conn_index']]))

        out_layer = Dense(1, activation='sigmoid')(layers[-1])
        model = Model(inputs=layers[0], outputs=out_layer)

        model.compile(...)
        results = model.fit(...)

        return results, model

    talos.Scan(model=make_skip_connection_model,
               params=params,
               hyperparameter_config=param_options)

    
    opened by david-thrower 0
  • Can Talos Work with Unsupervised Learning on LSTM/Autoencoder Model

    Can Talos Work with Unsupervised Learning on LSTM/Autoencoder Model

    Hi, I am trying to use Talos to optimize the hyperparameters of an unsupervised LSTM/autoencoder model. The model works without Talos. Since I do not have y data (no known labels / dependent variables), I created my model as shown below. The data input is called "scaled_data".

    set parameters for Talos

    p = {'optimizer': ['Nadam', 'Adam', 'sgd'], 'losses': ['binary_crossentropy', 'mse'], 'activation':['relu', 'elu']}

    create autoencoder model

    def create_model(X_input, y_input, params):
        autoencoder = Sequential()
        autoencoder.add(LSTM(12, input_shape=(scaled_data.shape[1], scaled_data.shape[2]),
                             activation=params['activation'], return_sequences=True,
                             kernel_regularizer=tf.keras.regularizers.l2(0.01)))
        autoencoder.add(LSTM(4, activation=params['activation']))
        autoencoder.add(RepeatVector(scaled_data.shape[1]))
        autoencoder.add(LSTM(4, activation=params['activation'], return_sequences=True))
        autoencoder.add(LSTM(12, activation=params['activation'], return_sequences=True))
        autoencoder.add(TimeDistributed(Dense(scaled_data.shape[2])))
        autoencoder.compile(optimizer=params['optimizer'], loss=params['losses'], metrics=['acc'])

        history = autoencoder.fit(X_input, y_input, epochs=10, batch_size=1, validation_split=0.0,
                                  callbacks=[EarlyStopping(monitor='acc', patience=3)]).history

        return autoencoder, history


    scan_object = talos.Scan(x=scaled_data, y=scaled_data, params=p, model=create_model, experiment_name='LSTM')

    My error says: TypeError: create_model() takes 3 positional arguments but 5 were given.

    How am I passing 5 arguments? Any ideas how to fix this issue? I looked through the documents and other questions, but don't see anything with an unsupervised model. Thank you!
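
    Talos invokes the input model with five positional arguments (x_train, y_train, x_val, y_val, params), which would explain the error. A likely fix, sketched here but untested against this model, is to accept all five; the validation pair can be ignored:

    def create_model(x_train, y_train, x_val, y_val, params):
        autoencoder = Sequential()
        ...  # add the LSTM/RepeatVector/TimeDistributed layers and compile, as above
        out = autoencoder.fit(x_train, y_train, epochs=10, batch_size=1,
                              callbacks=[EarlyStopping(monitor='acc', patience=3)])
        return out, autoencoder  # Talos expects (history, model) in this order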

    discussion 
    opened by krwiegold 7
  • Skip certain parameter combinations in the parameter space

    Skip certain parameter combinations in the parameter space

    1) I think Talos should add a method to skip impossible combinations of parameters.

    If, for example, I want to test CNN and MLP networks, some parameters, such as kernel_size, do not exist in certain combinations. Moreover, if I limit the time or the number of combinations, I do not want to waste any on impossible combinations.

    2) Once implemented, I can see how this feature will

    Although there are ways to skip these manually, I think it would be nice to use the parameter space in the same way that ParameterGrid from scikit-learn does.

    For example:

    parameters_to_evaluate = [{
         'number_of_layers': [1, 2, 3, 4, 5, 6, 7, 8],
         'first_neuron': [8, 16, 48, 64, 128, 256],
         'shape': ['funnel', 'brick'],
         'architecture': ['bilstm', 'bigru'],
          'activation': ['relu', 'sigmoid']
    }, {
         'number_of_layers': [1, 2, 3, 4, 5, 6, 7, 8],
         'first_neuron': [8, 16, 48, 64, 128, 256],
         'shape': ['funnel', 'brick'],
         'kernel_size': [3, 5],
         'architecture': ['cnn'],
         'activation': ['relu', 'sigmoid']
    }]
    

    3) I believe this feature is

    • [ ] critically important
    • [ ] must have
    • [x] nice to have

    4) Given the chance, I'd be happy to make a PR for this feature

    • [ ] definitely
    • [ ] possibly
    • [X] unlikely

    discussion 
    opened by Smolky 2
  • Return Analyze.best_params as dictionary

    Return Analyze.best_params as dictionary

    Currently, Reporting.best_params returns an array containing the best parameter values. However, it does not return the corresponding parameter names, which makes it difficult to tell which value stands for which parameter.

    1) I think Talos should add

    In commands/analyze.py, I think it would be better if best_params returned the complete dataframe (out) instead of the values (out.values)

    2) Once implemented, I can see how this feature will

    It will be easier to understand which values correspond to which parameters

    3) I believe this feature is

    nice to have

    4) Given the chance, I'd be happy to make a PR for this feature

    definitely
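
    In the meantime, the same information can be recovered on the user side from the analysis dataframe, assuming the columns line up with the parameter names ('val_acc' is a placeholder for whatever metric matters):

    r = talos.Reporting(scan_object)
    best_row = r.data.sort_values('val_acc', ascending=False).iloc[0]
    best_params = best_row.to_dict()  # keeps parameter names as keys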


    priority: MEDIUM value: ⭐ topic: experience 
    opened by rlleshi 3
  • Support for SavedModel output

    Support for SavedModel output

    Love Talos - thank you!

    1) I think Talos should add

    Support for SavedModel output in Deploy(). This is of course used by tf-serving and is becoming very popular.

    2) Once implemented, I can see how this feature will

    Make a team's workflow even more efficient in terms of getting it deployed into prod environments.

    3) I believe this feature is

    • [x] critically important
    • [ ] must have
    • [ ] nice to have

    4) Given the chance, I'd be happy to make a PR for this feature

    • [ ] definitely
    • [X] possibly - I am not sure my skills are up to it
    • [ ] unlikely

    Huge thanks once again!
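
    Until Deploy() grows a native option, one workaround sketch: restore the deployed model and re-save it, since in TF2 tf.keras's model.save() with a directory path writes SavedModel format (the 'my_experiment' names are placeholders):

    import talos

    talos.Deploy(scan_object, 'my_experiment', metric='val_acc')  # current zip-based output
    model = talos.Restore('my_experiment.zip').model              # plain Keras model
    model.save('my_experiment_savedmodel')                        # writes a SavedModel directory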


    priority: MEDIUM value: ⭐⭐⭐ topic: production 
    opened by jtlz2 2