Overview

Neural 🧠 Forecast

Deep Learning for time series

State-of-the-art time series forecasting for PyTorch.

NeuralForecast is a Python library for time series forecasting with deep learning models. It includes benchmark datasets, data-loading utilities, evaluation functions, statistical tests, univariate model benchmarks, and SOTA models implemented in PyTorch and PyTorch Lightning.

Getting started • Installation • Models

⚡ Why?

Accuracy:

  • A global model is fitted simultaneously to several time series (see the data-format sketch after the next list).
  • Shared information helps highly parametrized, flexible models.
  • Useful for items/SKUs that have little to no history available.

Efficiency:

  • Automatic featurization processes.
  • Fast computations (GPU or TPU).
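
For illustration, a minimal sketch of the long-format panel frame the library expects (columns unique_id, ds, y), with two made-up series stacked so that one global model can be fit on both; the fit itself is sketched under Getting Started:

import numpy as np
import pandas as pd

# Two made-up series in long format: one row per (series, timestamp).
dates = pd.date_range('2022-01-01', periods=24, freq='MS')
panel = pd.concat([
    pd.DataFrame({'unique_id': 'sku_A', 'ds': dates, 'y': np.random.rand(24)}),
    pd.DataFrame({'unique_id': 'sku_B', 'ds': dates, 'y': np.random.rand(24)}),
])
# A single global model can then be fit on all series at once, e.g. NeuralForecast(...).fit(df=panel)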

📖 Documentation

Here is a link to the documentation.

🧬 Getting Started Open In Colab

Example Jupyter Notebook

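For a quick taste before the notebook, a minimal fit/predict sketch, assuming the v1.x core API (the NeuralForecast wrapper plus the bundled AirPassengersDF example frame); the hyperparameters are arbitrary:

from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.utils import AirPassengersDF

# Fit one NHITS model on the monthly AirPassengers series and forecast 12 steps ahead.
nf = NeuralForecast(models=[NHITS(h=12, input_size=24, max_epochs=50)], freq='M')
nf.fit(df=AirPassengersDF)
forecasts = nf.predict()  # one row per future timestamp, with an NHITS prediction column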

💻 Installation

PyPI

You can install the released version of NeuralForecast from the Python package index with:

pip install neuralforecast

(Installing inside a Python virtual environment or a conda environment is recommended.)

Conda

You can also install the released version of NeuralForecast from conda with:

conda install -c nixtla neuralforecast

(Installing inside a Python virtual environment or a conda environment is recommended.)

Dev Mode

If you want to make some modifications to the code and see the effects in real time (without reinstalling), follow the steps below:
git clone https://github.com/Nixtla/neuralforecast.git
cd neuralforecast
pip install -e .
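
To check that the editable install is the one being imported (assuming the package exposes __version__, as the released wheels do):

import neuralforecast
print(neuralforecast.__version__)  # should match the checked-out source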

Forecasting models

  • Neural Hierarchical Interpolation for Time Series Forecasting (N-HiTS): A new model for long-horizon forecasting that incorporates novel hierarchical interpolation and multi-rate data sampling techniques to specialize blocks of its architecture to different frequency bands of the time-series signal. It achieves SoTA performance on several benchmark datasets, outperforming current Transformer-based models by more than 25%.

  • Exponential Smoothing Recurrent Neural Network (ES-RNN): A hybrid model that combines the expressivity of nonlinear models to capture the trends while normalizing with a Holt-Winters-inspired model for the levels and seasonalities. This model won the M4 forecasting competition.

  • Neural Basis Expansion Analysis (N-BEATS): A model from Element AI (Yoshua Bengio's lab) that has proven to achieve state-of-the-art performance on large-scale benchmark forecasting datasets like Tourism, M3, and M4. The model is fast to train and has an interpretable configuration.

  • Transformer-Based Models: Transformer-based framework for unsupervised representation learning of multivariate time series.
    • Autoformer: Encoder-decoder model with decomposition capabilities and an attention approximation based on the Fourier transform.
    • Informer: Transformer with an MLP-based multi-step prediction strategy that approximates self-attention with sparsity.
    • Transformer: Classical vanilla Transformer.
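
All of the above share the same interface, so comparing models is mostly a matter of listing more of them; a hedged sketch reusing the panel frame from the Why? section (hyperparameters arbitrary):

from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATS, NHITS

# Train two models side by side on the same panel; predict() returns one column per model.
nf = NeuralForecast(models=[NBEATS(h=12, input_size=24), NHITS(h=12, input_size=24)], freq='MS')
nf.fit(df=panel)
forecasts = nf.predict()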

📃 License

This project is licensed under the GPLv3 License - see the LICENSE file for details.

🔨 How to contribute

See CONTRIBUTING.md.

Contributors ✨

Thanks goes to these wonderful people (emoji key):


  • fede: 💻 🐛 📖
  • Greg DeVos: 🤔
  • Cristian Challu: 💻
  • mergenthaler: 📖 💻
  • Kin: 💻 🐛 🔣
  • José Morales: 💻
  • Alejandro: 💻
  • stefanialvs: 🎨
  • Ikko Ashimine: 🐛

This project follows the all-contributors specification. Contributions of any kind welcome!

Comments
  • NBEATSx error on retrain, when callbacks defined

    Describe the bug: Retraining a pre-trained model with callbacks enabled results in an error.

    Traceback (most recent call last):
      File "nf.py", line 145, in <module>
        nf.fit(df=train_df)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/neuralforecast/core.py", line 157, in fit
        model.fit(self.dataset, val_size=val_size)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/neuralforecast/common/_base_windows.py", line 493, in fit
        trainer.fit(self, datamodule=datamodule)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
        self._call_and_handle_interrupt(
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt
        return trainer_fn(*args, **kwargs)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
        results = self._run(model, ckpt_path=self.ckpt_path)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1224, in _run
        self._log_hyperparams()
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1294, in _log_hyperparams
        logger.save()
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/pytorch_lightning/utilities/rank_zero.py", line 32, in wrapped_fn
        return fn(*args, **kwargs)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py", line 266, in save
        save_hparams_to_yaml(hparams_file, self.hparams)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 402, in save_hparams_to_yaml
        yaml.dump(v)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/__init__.py", line 290, in dump
        return dump_all([data], stream, Dumper=Dumper, **kwds)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/__init__.py", line 278, in dump_all
        dumper.represent(data)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 27, in represent
        node = self.represent_data(data)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 48, in represent_data
        node = self.yaml_representers[data_types[0]](self, data)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 199, in represent_list
        return self.represent_sequence('tag:yaml.org,2002:seq', data)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 92, in represent_sequence
        node_item = self.represent_data(item)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 52, in represent_data
        node = self.yaml_multi_representers[data_type](self, data)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 342, in represent_object
        return self.represent_mapping(
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 118, in represent_mapping
        node_value = self.represent_data(item_value)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 52, in represent_data
        node = self.yaml_multi_representers[data_type](self, data)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 346, in represent_object
        return self.represent_sequence(tag+function_name, args)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 92, in represent_sequence
        node_item = self.represent_data(item)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 52, in represent_data
        node = self.yaml_multi_representers[data_type](self, data)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 342, in represent_object
        return self.represent_mapping(
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 118, in represent_mapping
        node_value = self.represent_data(item_value)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 52, in represent_data
        node = self.yaml_multi_representers[data_type](self, data)
      File "/home/.../miniforge3/envs/.../lib/python3.8/site-packages/yaml/representer.py", line 330, in represent_object
        dictitems = dict(dictitems)
    ValueError: dictionary update sequence element #0 has length 1; 2 is required
    

    To Reproduce: Steps to reproduce the behavior:

    1. Train the initial model for x epochs and save it:
        df = pd.read_csv(data_fp, parse_dates=['ds'])
        train_df, val_df = df[:-validate_holdout], df[-validate_holdout:]
    
        loss = QuantileLoss(q=0.5)
    
        early_stop_callback = EarlyStopping(
            monitor="train_loss_epoch",
            min_delta=0.00001,
            patience=1500,
            mode="min",
            verbose=False
        )
    
        callbacks = [
            early_stop_callback,
        ]
    
        models = [
            NBEATSx(
                h=horizon,
                input_size=input_size,
                futr_exog_list=future_exog,  # <- Future exogenous variables
                hist_exog_list=historic_exog,  # <- Historical exogenous variables
    
                learning_rate=1e-5,
                batch_size=32,
                windows_batch_size=1024,
                max_epochs=40,
    
                n_harmonics=8, 
                n_polynomials=6,  
                stack_types=["identity", "trend", "seasonality"],
                n_blocks=[1, 1, 1],
                mlp_units=[[32, 32], [32, 32], [32, 32]],
    
                activation="ReLU",
                shared_weights=False,
                loss=loss,
                scaler_type="robust",
    
                # trainer args
                callbacks=callbacks,
                num_lr_decays=0,
            )
        ]
    
        nf = NeuralForecast(models=models, freq='H')
        nf.fit(df=train_df)
        nf.save('bbbb', overwrite=True)
    
    2. Load and attempt to retrain the model (change the last three lines to ...)
        nf = NeuralForecast.load('bbbb')
        nf.fit(df=train_df)
        nf.save('bbbb', overwrite=True)
    

    After step 2, you will receive the above error.

    3. Retry steps 1 & 2, commenting out the callbacks=callbacks argument passed to NBEATSx. It should save, load, and retrain without error. Only when the callbacks are included in the model definition does it fail to retrain after save/load.
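
    A hedged, untested workaround sketch (assuming the loaded models keep their trainer arguments in a trainer_kwargs dict, as the v1.x base classes appear to): swap the deserialized callback objects for freshly constructed ones before refitting.

    nf = NeuralForecast.load('bbbb')
    # hypothetical: replace stale callbacks with fresh instances before calling fit again
    nf.models[0].trainer_kwargs['callbacks'] = [
        EarlyStopping(monitor='train_loss_epoch', min_delta=0.00001, patience=1500, mode='min')
    ]
    nf.fit(df=train_df)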

    Expected behavior The model should be re-trainable after loading.

    Desktop (please complete the following information):

    • OS: Ubuntu MATE 20.04 x64
    • Python 3.8.13
    • NeuralForecast 1.3.0
    opened by MC-Dave 9
  • ValueError: Trial returned a result which did not include the specified metric(s) `loss` that `tune.TuneConfig()` expects. while

    Does anybody get "ValueError: Trial returned a result which did not include the specified metric(s) `loss` that `tune.TuneConfig()` expects." while using the LongHorizon_with_NHITS.ipynb notebook?

    bug 
    opened by xiao-he 9
  • Models missing possibility of multivariate target variable

    I wanted to suggest a new feature: having separate input and output arrays for the models instead of the dataframe. This would give more possibilities in data preprocessing. Thanks, this is an amazing repository!

    opened by max6457 8
  • Installation error for neuralforecast

    I used the following command to download the library so I could start using it and compare its results to DeepAR and Neural Prophet:

    !pip install neuralforecast

    and I got the following error:

    ERROR: Could not find a version that satisfies the requirement neuralforecast (from versions: none)
    ERROR: No matching distribution found for neuralforecast

    opened by Msaleh87 8
  • Almost no convergence on Bitcoin Price Data

    Two questions

    • I tried training on BTC data, but the model doesn't learn much (MSE = 7233). Is there a fundamental mistake in my approach, or is NHITS not the right tool for this task? By comparison, an LSTM achieved an MSE of 0.24655 on the same dataset.

    • What do y_hat and y_true mean exactly, and how are they calculated? What is their function, since they always seem to have the same values?

    The model is set to horizon=1, as we need to predict one t+1 interval (a 5m interval in this case). I assume this is correct:

    model = nf.auto.NHITS(horizon=1)

    Here is the entire notebook with my code


    opened by Karlheinzniebuhr 8
  • Question on WindowsDataset / TimeSeriesLoader

    I'm keen to use neuralforecast for my own work. I'm interested in anomaly detection, which is slightly different from forecasting: generally trying to reconstruct a window rather than forecast the next n points, but I think I can make it work.

    I'm a little confused about the TimeSeriesLoader, though. I've been working through my own example, but I'll refer to the getting started notebook to make it easier.

    The WindowsDataset is indexed by the number of unique_ids in the original dataset.

    train_dataset = nf.data.tsdataset.WindowsDataset(
        Y_df=Y_df_train, 
        X_df=X_df_train,
        f_cols=[f'ex_{i}' for i in range(1, 5)],
        input_size=input_size,
        output_size=output_size,
        mask_df=train_mask_df
    )
    

    So in this case there are 7 series, each containing 774 windows of length 144, i.e.

    len(train_dataset) == 7
    train_dataset[0]['Y'].shape == [774, 144]
    

    Where I'm really confused is the TimeSeriesLoader. I assumed that with batch_size=32 I would get 7 * 774 // 32 batches of data per epoch, each with shape [32, 144], but no matter what parameters I try, I only seem to get a single batch that's either of size batch_size or n_windows, i.e.

    train_loader = nf.data.tsloader.TimeSeriesLoader(train_dataset, batch_size=32, eq_batch_size=True, shuffle=True)
    for batch in train_loader:
        print(batch['Y'].shape)
    

    torch.Size([32, 144])

    It only returns a single batch?

    I'm expecting something like the following

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    
    # dummy tensor with first dim being number of series x number of windows per series
    train_dataset = TensorDataset(torch.zeros(size=(774*7,144)))
    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
    
    for batch in train_loader:
        print(batch[0].shape)
    

    torch.Size([32, 144])
    torch.Size([32, 144])
    torch.Size([32, 144])
    ...
    torch.Size([32, 144])

    Is there something wrong here? I want to train a model on multiple long (1 year) series with 15T frequency and 1 day windows. I'm confused as to why the DataLoader only returns one batch per epoch.

    opened by david-waterworth 7
  • example for `TimeSeriesDataset`

    Hello,

    I'm trying to use the TFT model on my custom dataset. For that, I created a custom PyTorch dataset, and when I try to call the fit method on it, it tells me that it got an unexpected type.

    Here is the construction of my dataset

    class BSM2Dataset(Dataset):
      """BSM2 dataset."""
    
      def __init__(self):
        # self.landmarks_frame = pd.read_csv(csv_file)
        self.inf = pd.read_csv('/content/bsm2_influent.csv').values
        self.eff = pd.read_csv('/content/bsm2_effluent.csv').values
        self.X, self.Y = self.split_series(1, 1)
    
      def __len__(self):
        return len(self.inf)
    
      def split_series(self, n_past, n_future):
        X, y = list(), list()
        for window_start in range(len(self.inf)):
          past_end = window_start + n_past
          future_end = past_end + n_future
          if future_end > len(self.inf):
            break
          # slicing the past and future parts of the window
          past, future = self.inf[window_start:past_end, :], self.eff[past_end - 1:future_end -1, :]
          X.append(past)
          y.append(future)
        return np.array(X), np.array(y)
    
      def __getitem__(self, idx):
        if torch.is_tensor(idx):
          idx = idx.tolist()
    
        return self.X[idx], self.Y[idx]
    

    Here the call to the fit method

    foo = BSM2Dataset()
    model = TFT(h=12,
                    input_size=48,
                    hidden_size=100,
                    stat_exog_list=['airline1'],
                    hist_exog_list=['y_[lag12]'],
                    futr_exog_list=['trend'],
                    max_epochs=300,
                    learning_rate=0.01,
                    scaler_type='robust',
                    loss=MQLoss(level=[80, 90]),
                    windows_batch_size=None,
                    enable_progress_bar=True)
    model.fit(foo)
    

    The call fails and here is the entire stacktrace

    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)
    [<ipython-input-14-a8a91adbe35a>](https://localhost:8080/#) in <module>
    ----> 1 model.fit(foo)
    
    18 frames
    [/usr/local/lib/python3.7/dist-packages/neuralforecast/common/_base_windows.py](https://localhost:8080/#) in fit(self, dataset, val_size, test_size)
        399 
        400         trainer = pl.Trainer(**self.trainer_kwargs)
    --> 401         trainer.fit(self, datamodule=datamodule)
        402 
        403     def predict(self, dataset, test_size=None, step_size=1, **data_module_kwargs):
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py](https://localhost:8080/#) in fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
        769         self.strategy.model = model
        770         self._call_and_handle_interrupt(
    --> 771             self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
        772         )
        773 
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py](https://localhost:8080/#) in _call_and_handle_interrupt(self, trainer_fn, *args, **kwargs)
        721                 return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
        722             else:
    --> 723                 return trainer_fn(*args, **kwargs)
        724         # TODO: treat KeyboardInterrupt as BaseException (delete the code below) in v1.7
        725         except KeyboardInterrupt as exception:
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py](https://localhost:8080/#) in _fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
        809             ckpt_path, model_provided=True, model_connected=self.lightning_module is not None
        810         )
    --> 811         results = self._run(model, ckpt_path=self.ckpt_path)
        812 
        813         assert self.state.stopped
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py](https://localhost:8080/#) in _run(self, model, ckpt_path)
       1234         self._checkpoint_connector.resume_end()
       1235 
    -> 1236         results = self._run_stage()
       1237 
       1238         log.detail(f"{self.__class__.__name__}: trainer tearing down")
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py](https://localhost:8080/#) in _run_stage(self)
       1321         if self.predicting:
       1322             return self._run_predict()
    -> 1323         return self._run_train()
       1324 
       1325     def _pre_training_routine(self):
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py](https://localhost:8080/#) in _run_train(self)
       1343 
       1344         with isolate_rng():
    -> 1345             self._run_sanity_check()
       1346 
       1347         # enable train mode
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py](https://localhost:8080/#) in _run_sanity_check(self)
       1411             # run eval step
       1412             with torch.no_grad():
    -> 1413                 val_loop.run()
       1414 
       1415             self._call_callback_hooks("on_sanity_check_end")
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py](https://localhost:8080/#) in run(self, *args, **kwargs)
        202             try:
        203                 self.on_advance_start(*args, **kwargs)
    --> 204                 self.advance(*args, **kwargs)
        205                 self.on_advance_end()
        206                 self._restarting = False
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py](https://localhost:8080/#) in advance(self, *args, **kwargs)
        153         if self.num_dataloaders > 1:
        154             kwargs["dataloader_idx"] = dataloader_idx
    --> 155         dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
        156 
        157         # store batch level output per dataloader
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/base.py](https://localhost:8080/#) in run(self, *args, **kwargs)
        202             try:
        203                 self.on_advance_start(*args, **kwargs)
    --> 204                 self.advance(*args, **kwargs)
        205                 self.on_advance_end()
        206                 self._restarting = False
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py](https://localhost:8080/#) in advance(self, data_fetcher, dl_max_batches, kwargs)
        110         if not isinstance(data_fetcher, DataLoaderIterDataFetcher):
        111             batch_idx = self.batch_progress.current.ready
    --> 112             batch = next(data_fetcher)
        113         else:
        114             batch_idx, batch = next(data_fetcher)
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/fetching.py](https://localhost:8080/#) in __next__(self)
        182 
        183     def __next__(self) -> Any:
    --> 184         return self.fetching_function()
        185 
        186     def reset(self) -> None:
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/fetching.py](https://localhost:8080/#) in fetching_function(self)
        257             # this will run only when no pre-fetching was done.
        258             try:
    --> 259                 self._fetch_next_batch(self.dataloader_iter)
        260                 # consume the batch we just fetched
        261                 batch = self.batches.pop(0)
    
    [/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/fetching.py](https://localhost:8080/#) in _fetch_next_batch(self, iterator)
        271     def _fetch_next_batch(self, iterator: Iterator) -> None:
        272         start_output = self.on_fetch_start()
    --> 273         batch = next(iterator)
        274         self.fetched += 1
        275         if not self.prefetch_batches and self._has_len:
    
    [/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in __next__(self)
        679                 # TODO(https://github.com/pytorch/pytorch/issues/76750)
        680                 self._reset()  # type: ignore[call-arg]
    --> 681             data = self._next_data()
        682             self._num_yielded += 1
        683             if self._dataset_kind == _DatasetKind.Iterable and \
    
    [/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self)
        719     def _next_data(self):
        720         index = self._next_index()  # may raise StopIteration
    --> 721         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
        722         if self._pin_memory:
        723             data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
    
    [/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index)
         50         else:
         51             data = self.dataset[possibly_batched_index]
    ---> 52         return self.collate_fn(data)
    
    [/usr/local/lib/python3.7/dist-packages/neuralforecast/tsdataset.py](https://localhost:8080/#) in _collate_fn(self, batch)
         61                         temporal_cols = elem['temporal_cols'])
         62 
    ---> 63         raise TypeError(f'Unknown {elem_type}')
         64 
         65 # %% ../nbs/tsdataset.ipynb 7
    
    TypeError: Unknown <class 'tuple'>
    

    I tried searching the site and couldn't find an example of TimeSeriesDataset in use. Would it be possible to have a tutorial notebook which shows how to do this?

    Thanks!
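
    In the meantime, a hedged workaround sketch: rather than writing a custom torch Dataset, reshape the CSVs into the long unique_id/ds/y (+ exogenous) format and let the NeuralForecast wrapper build its internal TimeSeriesDataset. Paths and hyperparameters are taken from the snippets above; the reshaping itself is elided.

    import pandas as pd
    from neuralforecast import NeuralForecast
    from neuralforecast.models import TFT

    df = pd.read_csv('/content/bsm2_influent.csv')  # reshape to columns unique_id, ds, y (+ exogenous) first
    nf = NeuralForecast(models=[TFT(h=12, input_size=48)], freq='H')
    nf.fit(df=df)  # the wrapper constructs the TimeSeriesDataset internally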

    opened by deven-gqc 6
  • 'Mismatch in X, Y ds' when applying NBEATSx.forecast method

    Hi, any idea how to solve this issue? I used a custom dataset and tried to mimic this code, but it returns the error below. This is the data.

    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    Input In [20], in <cell line: 2>()
          1 model.return_decomposition = False
    ----> 2 forecast_df = model.forecast(Y_df=Y_forecast_df, X_df=X_forecast_df, S_df=S_df, batch_size=2)
          3 forecast_df
    
    Input In [10], in forecast(self, Y_df, X_df, S_df, batch_size, trainer, verbose)
         39 Y_df = Y_df.append(forecast_df).sort_values(['unique_id','ds']).reset_index(drop=True)
         41 # Dataset, loader and trainer
    ---> 42 dataset = WindowsDataset(S_df=S_df, Y_df=Y_df, X_df=X_df,
         43                             mask_df=None, f_cols=[],
         44                             input_size=self.n_time_in,
         45                             output_size=self.n_time_out,
         46                             sample_freq=1,
         47                             complete_windows=True,
         48                             ds_in_test=self.n_time_out,
         49                             is_test=True,
         50                             verbose=verbose)
         52 loader = TimeSeriesLoader(dataset=dataset,
         53                             batch_size=batch_size,
         54                             shuffle=False)
         56 if trainer is None:
    
    File ~\Anaconda3\envs\SiT\lib\site-packages\neuralforecast\data\tsdataset.py:636, in WindowsDataset.__init__(self, Y_df, input_size, output_size, X_df, S_df, f_cols, mask_df, ds_in_test, is_test, sample_freq, complete_windows, last_window, verbose)
        590 def __init__(self,
        591              Y_df: pd.DataFrame,
        592              input_size: int,
       (...)
        602              last_window: bool = False,
        603              verbose: bool = False) -> 'TimeSeriesDataset':
        604     """
        605     Parameters
        606     ----------
       (...)
        634         Wheter or not log outputs.
        635     """
    --> 636     super(WindowsDataset, self).__init__(Y_df=Y_df, input_size=input_size,
        637                                          output_size=output_size,
        638                                          X_df=X_df, S_df=S_df, f_cols=f_cols,
        639                                          mask_df=mask_df, ds_in_test=ds_in_test,
        640                                          is_test=is_test, complete_windows=complete_windows,
        641                                          verbose=verbose)
        642     # WindowsDataset parameters
        643     self.windows_size = self.input_size + self.output_size
    
    File ~\Anaconda3\envs\SiT\lib\site-packages\neuralforecast\data\tsdataset.py:110, in BaseDataset.__init__(self, Y_df, X_df, S_df, f_cols, mask_df, ds_in_test, is_test, input_size, output_size, complete_windows, verbose)
        106     dataset_info += f'Outsample percentage={out_prc}, \t{n_out} time stamps \n'
        107     logging.info(dataset_info)
        109 self.ts_data, self.s_matrix, self.meta_data, self.t_cols, self.s_cols \
    --> 110                  = self._df_to_lists(Y_df=Y_df, S_df=S_df, X_df=X_df, mask_df=mask_df)
        112 # Dataset attributes
        113 self.n_series = len(self.ts_data)
    
    File ~\Anaconda3\envs\SiT\lib\site-packages\neuralforecast\data\tsdataset.py:201, in _df_to_lists(self, S_df, Y_df, X_df, mask_df)
        198 M = mask_df.sort_values(by=['unique_id', 'ds'], ignore_index=True).copy()
        200 assert np.array_equal(X.unique_id.values, Y.unique_id.values), f'Mismatch in X, Y unique_ids'
    --> 201 assert np.array_equal(X.ds.values, Y.ds.values), f'Mismatch in X, Y ds'
        202 assert np.array_equal(M.unique_id.values, Y.unique_id.values), f'Mismatch in M, Y unique_ids'
        203 assert np.array_equal(M.ds.values, Y.ds.values), f'Mismatch in M, Y ds'
    
    AssertionError: Mismatch in X, Y ds
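
    The assertion fires when X_df and Y_df do not cover exactly the same (unique_id, ds) rows. A hedged pandas sketch to locate the offending timestamps before building the dataset:

    keys = ['unique_id', 'ds']
    # rows present in one frame but missing from the other
    only_in_X = X_df.merge(Y_df[keys], on=keys, how='left', indicator=True).query("_merge == 'left_only'")
    only_in_Y = Y_df.merge(X_df[keys], on=keys, how='left', indicator=True).query("_merge == 'left_only'")
    print(only_in_X[keys])
    print(only_in_Y[keys])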
    
    opened by ramdhan1989 6
  • how to use continuous exogenous variable in the future for forecasting problem

    Hi, I need to forecast a target variable, and I have two continuous time series that can be used as exogenous variables to forecast the target multiple steps ahead. I can access the future values of the exogenous variables, and for production purposes I would like to simulate their impact on the forecasted target. So I need to use the exogenous values at time t+1 to forecast the target at time t+1, and so on. Can I do that?

    thank you
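
    In the v1.x API this is what futr_exog_list plus the futr_df argument of predict are for; a hedged sketch (column names made up, NBEATSx chosen because it accepts future exogenous variables):

    from neuralforecast import NeuralForecast
    from neuralforecast.models import NBEATSx

    model = NBEATSx(h=24, input_size=48, futr_exog_list=['exog1', 'exog2'])  # regressors known in the future
    nf = NeuralForecast(models=[model], freq='H')
    nf.fit(df=train_df)                      # train_df columns: unique_id, ds, y, exog1, exog2
    forecasts = nf.predict(futr_df=futr_df)  # futr_df columns: unique_id, ds, exog1, exog2 for the next h steps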

    question 
    opened by ramdhan1989 6
  • TFT-GMM - No S Matrix?

    TFT-GMM is listed as a hierarchical methodology in the documentation, though I don't see a summation matrix parameter as in Hierarchical DeepVAR. I assume, then, that it's not hierarchical in the sense of learning hierarchical reconciliation as Hierarchical DeepVAR does?

    Cheers, Eric

    opened by esbraun 5
  • broken example for `TFT`

    Hello,

    I was trying to try the demo code for the Temporal Fusion Transformer and the example given in the docs is broken.

    1. There is a missing import: from neuralforecast.models import TFT
    2. I'm unable to call the fit method, as the function does not take the static_df argument. I checked the source code to see if the argument was renamed, but unfortunately that isn't the case.


    Thanks!

    opened by deven-gqc 4
  • [FEAT] Missing AutoNBEATSx model.

    The simple NBEATS model already has its Auto version; the NBEATSx model is missing one.

    For the moment, one can recover the AutoNBEATSx behavior through AutoNHITS by setting n_freq_downsample to a list of ones.
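
    A hedged sketch of that workaround, assuming the v1.x Auto API in which the search space is passed as a config dict of ray-tune samplers:

    from ray import tune
    from neuralforecast.auto import AutoNHITS

    config = dict(
        input_size=tune.choice([48]),
        learning_rate=tune.loguniform(1e-4, 1e-2),
        n_freq_downsample=tune.choice([[1, 1, 1]]),  # all ones: no multi-rate sampling, NBEATSx-like blocks
    )
    model = AutoNHITS(h=24, config=config)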

    enhancement 
    opened by kdgutier 0
  • [FEAT] NeuralForecast.forecast method is missing

    StatsForecast core class has a very useful .forecast method: https://nixtla.github.io/statsforecast/core.html

    Currently, the NeuralForecast core class can mimic its behavior by calling the .predict method without additional arguments.

    It would be convenient to homogenize the methods and add a NeuralForecast.forecast method too.
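
    For reference, the current equivalent, assuming a fitted wrapper:

    forecasts = nf.predict()  # after nf.fit(df=...), this mirrors the proposed .forecast behavior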

    enhancement good first issue 
    opened by kdgutier 0
  • [BUG] environment.yml conda installation fails to locate pre installed CUDA path

    It seems that a naive conda installation on top of a server with CUDA pre-installed fails to locate its path correctly.

    conda env create -f environment.yml
    

    Previous manual installations like this https://github.com/cchallu/nbeatsx/blob/main/setup.sh correctly install PyTorch with CUDA dependencies.

    The following installation works correctly with CUDA pre-installed

    pip install neuralforecast
    

    A careful look into environment.yml is needed.

    opened by kdgutier 0
  • Binary Classification as final output

    I have a use case wherein I need to take multivariate time-series data as input, find correlations, and predict an output. In my particular case, I only need the output to be a 0 or 1: 0 meaning down, and 1 meaning up.

    NeuralForecast models appear to be designed to forecast unbounded values. Instead of a fixed label applied to each observation in the dataset, the horizon parameter determines the x-shift that produces the y labels at runtime.

    Is it possible to update the output so that it gives a value between 0 and 1 for each predicted time period?

    Thank you for your time! NeuralForecast is an excellent package
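
    Until something native exists, a hedged post-processing sketch: threshold the point forecast against the last observed value to get a 0/1 up/down signal. The model column name and the frames are made up, and this is not a true classification head.

    forecasts = nf.predict().reset_index()  # columns: unique_id, ds, plus one column per model
    last_y = train_df.groupby('unique_id')['y'].last()
    up = forecasts['NHITS'].values > last_y.loc[forecasts['unique_id']].values
    forecasts['direction'] = up.astype(int)  # 1 = up, 0 = down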

    opened by MC-Dave 2
  • `.predict` method missing exogenous variables

    After declaring and fitting a NeuralForecast model with exogenous variables, the predict method returns NaNs if the exogenous variables are not passed as input.

    We need to raise an exception that checks the inputs of the models. Maybe even an exception that checks the input pandas dataframes for NaNs.

    bug 
    opened by kdgutier 0
  • Releases (latest: v1.3.0)
    • v1.3.0 (Dec 15, 2022)

      What's Changed

      • [DOCS] Probabilistic Long-horizon forecasting in https://github.com/Nixtla/neuralforecast/pull/361
      • [FEAT]: Updated GMM Class in losses.pytorch in https://github.com/Nixtla/neuralforecast/pull/365
      • [FEAT] Scale decoupling changes for GMM and PMM class in https://github.com/Nixtla/neuralforecast/pull/366
      • [FEAT] AutoTFT in https://github.com/Nixtla/neuralforecast/pull/367
      • [FIX] Losses in Auto models initialization in https://github.com/Nixtla/neuralforecast/pull/369

      Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v1.2.0...v1.3.0

    • v1.2.0 (Dec 7, 2022)

      What's Changed

      • [FIX] Colab link getting started in https://github.com/Nixtla/neuralforecast/pull/329
      • Improved MQ-NBEATS [B,H]+[B,H,Q] -> [B,H,Q] in https://github.com/Nixtla/neuralforecast/pull/330
      • Improved MQ-NBEATSx [B,H]+[B,H,Q] -> [B,H,Q] in https://github.com/Nixtla/neuralforecast/pull/331
      • fixed pytorch losses' init documentation in https://github.com/Nixtla/neuralforecast/pull/333
      • TCN in https://github.com/Nixtla/neuralforecast/pull/332
      • Update README.md in https://github.com/Nixtla/neuralforecast/pull/335
      • [FEAT] DistributionLoss in https://github.com/Nixtla/neuralforecast/pull/339
      • [FEAT] Deprecated GMMTFT in favor of DistributionLoss' modularity in https://github.com/Nixtla/neuralforecast/pull/342
      • [Feat] Scaled Distributions in https://github.com/Nixtla/neuralforecast/pull/345
      • Deprecate AffineTransformed class in https://github.com/Nixtla/neuralforecast/pull/350
      • [FEAT] Add cla action in https://github.com/Nixtla/neuralforecast/pull/349
      • [FIX] Delete cla.yml in https://github.com/Nixtla/neuralforecast/pull/353
      • [FIX] CI tests in https://github.com/Nixtla/neuralforecast/pull/357
      • [FEAT] Added return_params to Distributions in https://github.com/Nixtla/neuralforecast/pull/348
      • [FEAT] Ignore jupyter notebooks as part of languages in https://github.com/Nixtla/neuralforecast/pull/355
      • [FEAT] Added num_samples to Distribution's initialization in https://github.com/Nixtla/neuralforecast/pull/359

      Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v1.1.0...v1.2.0

    • v1.1.0 (Nov 9, 2022)

      What's Changed

      • [FIX] Update license in https://github.com/Nixtla/neuralforecast/pull/285
      • [FEAT] Exogenous variables in https://github.com/Nixtla/neuralforecast/pull/286
      • scalers class in https://github.com/Nixtla/neuralforecast/pull/288
      • General documentation improvements in https://github.com/Nixtla/neuralforecast/pull/287
      • Fixed README links and added SoTA runs to examples in https://github.com/Nixtla/neuralforecast/pull/291
      • Improved documentation in https://github.com/Nixtla/neuralforecast/pull/293
      • Improved documentation in https://github.com/Nixtla/neuralforecast/pull/297
      • Improved RNN-based/BaseRecurrent/Windows in https://github.com/Nixtla/neuralforecast/pull/298
      • Save load in https://github.com/Nixtla/neuralforecast/pull/292
      • Fix normalizers in https://github.com/Nixtla/neuralforecast/pull/301
      • Improved example notebooks, changed numeration in https://github.com/Nixtla/neuralforecast/pull/302
      • Static variables in https://github.com/Nixtla/neuralforecast/pull/305
      • Turned Pytorch losses into torch.nn.module classes in https://github.com/Nixtla/neuralforecast/pull/311
      • Changed enable_checkpointing to False default in https://github.com/Nixtla/neuralforecast/pull/309
      • Correct main link pointers in https://github.com/Nixtla/neuralforecast/pull/314
      • Rnns normalizers in https://github.com/Nixtla/neuralforecast/pull/316
      • rnns with decoders, autos, usage examples, fix val in https://github.com/Nixtla/neuralforecast/pull/320
      • Improved documentation3 in https://github.com/Nixtla/neuralforecast/pull/322
      • getting started with LSTM and NHITS in https://github.com/Nixtla/neuralforecast/pull/323
      • recovered MQ-NHITS in https://github.com/Nixtla/neuralforecast/pull/327

      Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v1.0.0...v1.1.0

    • v1.0.0 (Oct 4, 2022)

      What's Changed

      • [BREAKING CHANGE] NeuralForecast Refactor https://github.com/Nixtla/neuralforecast/pull/281
      • [FIX] Nbdev docs https://github.com/Nixtla/neuralforecast/pull/282
      • [FEAT] Add examples in https://github.com/Nixtla/neuralforecast/pull/283

      Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v0.1.0...v1.0.0

    • v0.1.0 (Jun 2, 2022)

      What's Changed

      • Added Loss Function & Rewrote Unit Testing by @shibzhou in https://github.com/Nixtla/neuralforecast/pull/238
      • fix reshapes and rnn by @cchallu in https://github.com/Nixtla/neuralforecast/pull/247
      • mqnhits by @cchallu in https://github.com/Nixtla/neuralforecast/pull/248
      • y to device by @cchallu in https://github.com/Nixtla/neuralforecast/pull/249
      • Update LICENSE to MIT by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/251

      New Contributors

      • @shibzhou made their first contribution in https://github.com/Nixtla/neuralforecast/pull/238

      Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v0.0.9...v0.1.0

    • v0.0.9 (Apr 21, 2022)

      What's Changed

      • Added unit tests for numpy and pytorch losses by @kdgutier in https://github.com/Nixtla/neuralforecast/pull/232
      • fix/api auto by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/234

      Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v0.0.8...v0.0.9

    • v0.0.8 (Apr 19, 2022)

      What's Changed

      • Readme with colab by @mergenthaler in https://github.com/Nixtla/neuralforecast/pull/204
      • Utils debug by @kdgutier in https://github.com/Nixtla/neuralforecast/pull/202
      • fix scalers assert by @cchallu in https://github.com/Nixtla/neuralforecast/pull/207
      • Old auto nhits by @kdgutier in https://github.com/Nixtla/neuralforecast/pull/209
      • Fixed equality of masked mqloss and MQLoss by @kdgutier in https://github.com/Nixtla/neuralforecast/pull/215
      • TourismL hierarchical dataset by @kdgutier in https://github.com/Nixtla/neuralforecast/pull/216
      • feat: add workflow for pip by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/218
      • build(deps): bump nokogiri from 1.12.5 to 1.13.4 in /docs by @dependabot in https://github.com/Nixtla/neuralforecast/pull/219
      • fix: remove unused gem files by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/224
      • autonf class by @cchallu in https://github.com/Nixtla/neuralforecast/pull/225
      • fix: remove legacy module by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/223
      • fix: update conda-forge references by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/226
      • fix: order of uid, ds cols by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/228
      • Fix nbdev version by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/227
      • fix: numpy smape by @FedericoGarza in https://github.com/Nixtla/neuralforecast/pull/229

      Full Changelog: https://github.com/Nixtla/neuralforecast/compare/v0.0.7...v0.0.8

    • v0.0.7 (Mar 19, 2022)
