A Python library for Deep Graph Networks


PyDGN

Wiki

Description

This is a Python library to easily experiment with Deep Graph Networks (DGNs). It provides automatic management of data splitting and loading, as well as the most common experimental settings. It also handles both model selection and risk assessment procedures, by trying many different configurations in parallel (on CPU or GPU). This repository is built upon the PyTorch Geometric library, which provides support for data management.

If you happen to use or modify this code, please remember to cite our tutorial paper:

Bacciu Davide, Errica Federico, Micheli Alessio, Podda Marco: A Gentle Introduction to Deep Learning for Graphs, Neural Networks, 2020. DOI: 10.1016/j.neunet.2020.06.006.

If you are interested in a rigorous evaluation of Deep Graph Networks, check this out:

Errica Federico, Podda Marco, Bacciu Davide, Micheli Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Code

Installation:

(We assume git and Miniconda/Anaconda are installed)

First, make sure gcc 5.2.0 is installed: conda install -c anaconda libgcc=5.2.0. Also make sure that echo $LD_LIBRARY_PATH always contains :/home/[your user name]/miniconda3/lib. Then run the following commands from your terminal:

source setup/install.sh [<your_cuda_version>]
pip install pydgn

Where <your_cuda_version> is an optional argument that can be either cpu, cu102 or cu111 for PyTorch >= 1.8.0. If you do not provide a CUDA version, the script will default to cpu. The script will create a virtual environment named pydgn, with all the packages needed to run our code. Important: do NOT run this command using bash instead of source!

Remember that PyTorch macOS binaries don't support CUDA; install from source if CUDA is needed.

Usage:

Preprocess your dataset (see also Wiki)

python build_dataset.py --config-file [your data config file]

Example

python build_dataset.py --config-file DATA_CONFIGS/config_PROTEINS.yml 

Launch an experiment in debug mode (see also Wiki)

python launch_experiment.py --config-file [your exp. config file] --splits-folder [the splits MAIN folder] --data-splits [the splits file] --data-root [root folder of your data] --dataset-name [name of the dataset] --dataset-class [class that handles the dataset] --max-cpus [max cpu parallelism] --max-gpus [max gpu parallelism] --gpus-per-task [how many gpus to allocate for each job] --final-training-runs [how many final runs when evaluating on test. Results are averaged] --result-folder [folder where to store results]

Example (GPU required)

python launch_experiment.py --config-file MODEL_CONFIGS/config_SupToyDGN_RandomSearch.yml --splits-folder DATA_SPLITS/CHEMICAL/ --data-splits DATA_SPLITS/CHEMICAL/PROTEINS/PROTEINS_outer10_inner1.splits --data-root DATA --dataset-name PROTEINS --dataset-class pydgn.data.dataset.TUDatasetInterface --max-cpus 1 --max-gpus 1 --final-training-runs 1 --result-folder RESULTS/DEBUG

To debug your code, it is useful to add --debug to the command above. Notice, however, that the CLI will not work as expected here, as the code will be executed sequentially. After debugging, if you still need sequential execution, you can use --max-cpus 1 --max-gpus 1 --gpus-per-task [0/1] without the --debug option.

Grid Search 101

Have a look at one of the config files.

Random Search 101

Specify a num_samples entry in the config file with the number of random trials, replace grid with random, and specify a sampling method for each hyper-parameter. We provide different sampling methods:

  • choice --> pick at random from a list of arguments
  • uniform --> pick uniformly between the min and max arguments
  • normal --> sample from a normal distribution with the given mean and std
  • randint --> pick an integer at random between min and max
  • loguniform --> pick following the reciprocal distribution between log_min and log_max, with a specified base

There is one config file, namely config_SupToyDGN_RandomSearch.yml, which you can check to see an example.
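
For intuition, here is a minimal, self-contained sketch of how these sampling strategies could behave; this is not PyDGN's internal code, and the function names are illustrative only.

import random

# Illustrative implementations of the sampling strategies above
# (NOT PyDGN's internals, just a sketch of the intended semantics).

def sample_choice(options):
    # pick at random from a list of arguments
    return random.choice(options)

def sample_uniform(min_val, max_val):
    # pick uniformly between the min and max arguments
    return random.uniform(min_val, max_val)

def sample_normal(mean, std):
    # sample from a normal distribution with the given mean and std
    return random.gauss(mean, std)

def sample_randint(min_val, max_val):
    # pick an integer at random between min (inclusive) and max (exclusive);
    # whether max is inclusive in PyDGN is an assumption here
    return random.randrange(min_val, max_val)

def sample_loguniform(log_min, log_max, base=10):
    # draw the exponent uniformly, then exponentiate: values follow the
    # reciprocal (log-uniform) distribution between base**log_min and base**log_max
    return base ** random.uniform(log_min, log_max)

# Example: draw one random configuration
config = {
    "lr": sample_loguniform(-4, -2),             # between 1e-4 and 1e-2
    "hidden_units": sample_choice([32, 64, 128]),
    "dropout": sample_uniform(0.0, 0.5),
}
print(config)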

Data Splits

We provide the data splits taken from

Errica Federico, Podda Marco, Bacciu Davide, Micheli Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Code

in the DATA_SPLITS folder.

Credits:

This is a joint project with Marco Podda (GitHub / Homepage), whom I thank for his relentless dedication.

Many thanks to Antonio Carta (GitHub / Homepage) for incorporating the Ray library (see v0.4.0) into PyDGN! This will be of tremendous help.

Many thanks to Danilo Numeroso (GitHub / Homepage) for implementing a very flexible random search! This is a very convenient alternative to grid search.

Contributing

This research software is provided as-is. We are working on this library in our spare time.

If you find a bug, please open an issue to report it, and we will do our best to solve it. For generic/technical questions, please email us rather than opening an issue.

License:

PyDGN is GPL 3.0 licensed, as written in the LICENSE file.

Troubleshooting

As of August 15th, 2021, there is an issue with PyTorch 1.9.0 that impacts the CLI. This is why the setup script installs PyTorch 1.8.1 in the pydgn conda environment, until PyTorch 1.10 is released (which is known to solve the issue).

--

If you get errors like /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found:

  • make sure gcc 5.2.0 is installed: conda install -c anaconda libgcc=5.2.0
  • echo $LD_LIBRARY_PATH should contain :/home/[your user name]/[your anaconda or miniconda folder name]/lib
  • after checking the above points, you can reinstall everything with pip using the --no-cache-dir option
Comments
  • Keep getting raylet error

    Keep getting raylet error

    🔨 Describe the bug

    Hi, I keep getting a raylet error when trying to run the example. Is there a way to stop using Ray, since I am running the experiment on my local computer?

    Thank you!

    (raylet) /home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/autoscaler/_private/cli_logger.py:57: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via pip install 'ray[default]'. Please update your install command.
    (raylet)   warnings.warn(
    2022-10-14 13:08:12,974 WARNING worker.py:1189 -- The agent on node EW22-05284 failed with the following error:
    Traceback (most recent call last):
      File ".../site-packages/ray/new_dashboard/agent.py", line 354, in <module>
        loop.run_until_complete(agent.run())
      File ".../python3.8/asyncio/base_events.py", line 616, in run_until_complete
        return future.result()
      File ".../site-packages/ray/new_dashboard/agent.py", line 144, in run
        modules = self._load_modules()
      File ".../site-packages/ray/new_dashboard/agent.py", line 98, in _load_modules
        c = cls(self)
      File ".../site-packages/ray/new_dashboard/modules/reporter/reporter_agent.py", line 148, in __init__
        self._metrics_agent = MetricsAgent(dashboard_agent.metrics_export_port)
      File ".../site-packages/ray/_private/metrics_agent.py", line 75, in __init__
        prometheus_exporter.new_stats_exporter(
      File ".../site-packages/ray/_private/prometheus_exporter.py", line 333, in new_stats_exporter
        exporter = PrometheusStatsExporter(
      File ".../site-packages/ray/_private/prometheus_exporter.py", line 266, in __init__
        self.serve_http()
      File ".../site-packages/ray/_private/prometheus_exporter.py", line 320, in serve_http
        start_http_server(
      File ".../site-packages/prometheus_client/exposition.py", line 168, in start_wsgi_server
        TmpServer.address_family, addr = _get_best_family(addr, port)
      File ".../site-packages/prometheus_client/exposition.py", line 157, in _get_best_family
        infos = socket.getaddrinfo(address, port)
      File ".../python3.8/socket.py", line 918, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    socket.gaierror: [Errno -2] Name or service not known

    (The same traceback is then printed a second time by the raylet process, together with another copy of the FutureWarning and a second agent-failure warning at 13:08:14.)

    bug 
    opened by jwtxwd 4
  • Training engine and returned data list

    Training engine and returned data list

    Feature description

    When shuffling the dataset, the training engine will return the data list shuffled according to the permutation of the data sampler. We should make sure that we reorder the data list back to the original order.
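
    A minimal sketch of the reordering step, assuming the sampler exposes the permutation it used as a list of indices (all names here are hypothetical, not the training engine's actual API):

    def reorder_data_list(shuffled_data_list, permutation):
        # permutation[i] is assumed to be the original index of the i-th returned element
        restored = [None] * len(shuffled_data_list)
        for shuffled_pos, original_idx in enumerate(permutation):
            restored[original_idx] = shuffled_data_list[shuffled_pos]
        return restored

    # Example: the sampler visited items 2, 0, 1
    perm = [2, 0, 1]
    shuffled = ["c", "a", "b"]          # data list as returned by the engine
    assert reorder_data_list(shuffled, perm) == ["a", "b", "c"]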

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 2
  • [WIP] feat(plotter): Add W&B Logging

    [WIP] feat(plotter): Add W&B Logging

    This PR proposes to add Weights & Biases logging to the library using helpful TensorBoard-based utilities. Instead of a separate logger, I currently propose we just monkeypatch and upload the TensorBoard logs to W&B.
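
    For reference, one lightweight way to do this (a sketch, not necessarily the final design of this PR) is W&B's built-in TensorBoard syncing, which patches the SummaryWriter so that everything logged to TensorBoard is mirrored to the W&B run:

    import wandb
    from torch.utils.tensorboard import SummaryWriter

    # Sketch: sync_tensorboard=True patches TensorBoard so that scalars written
    # through SummaryWriter are also uploaded to the W&B run.
    wandb.init(project="pydgn-experiments", sync_tensorboard=True)  # project name is illustrative

    writer = SummaryWriter(log_dir="runs/example")
    for epoch in range(10):
        writer.add_scalar("train/loss", 1.0 / (epoch + 1), epoch)
    writer.close()
    wandb.finish()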

    Leaving it as a Draft for now to start a conversation.

    CC: @diningphil @gravins

    opened by SauravMaheshkar 2
  • Fix Ray not deallocating GPU memory

    Fix Ray not deallocating GPU memory

    🔨 Describe the bug

    Some idle workers do not release GPU memory. This problem, together with a potential fix, is described here: https://docs.ray.io/en/latest/ray-core/tasks/using-ray-with-gpus.html#workers-not-releasing-gpu-resources
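
    Following the workaround suggested in the linked Ray documentation, the idea is to prevent worker reuse for GPU tasks; a sketch (the task name and body are illustrative, not PyDGN's actual code):

    import ray

    # With max_calls=1 the worker process exits after each task, so any GPU
    # memory it allocated is released instead of lingering in an idle worker.
    @ray.remote(num_gpus=1, max_calls=1)
    def run_configuration(config):
        # ... build the model, train it, return the results ...
        return {"config": config, "score": 0.0}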

    bug 
    opened by diningphil 1
  • Add option to specify subset of GPUs

    Add option to specify subset of GPUs

    Feature description

    In case one wants to force specific GPU IDs to be used, it should be possible to do so when running pydgn-train.
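
    One common way to achieve this, shown here only as a sketch and not necessarily how pydgn-train would implement it, is to restrict which devices CUDA exposes to the process before any CUDA context is created:

    import os

    # Make only GPUs 0 and 2 visible to this process. This must happen before
    # CUDA is initialized; PyTorch will then see them re-indexed as cuda:0 and cuda:1.
    os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"

    import torch
    print(torch.cuda.device_count())  # 2, assuming the machine actually has GPUs 0 and 2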

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Bump aiohttp from 3.7 to 3.7.4

    Bump aiohttp from 3.7 to 3.7.4

    Bumps aiohttp from 3.7 to 3.7.4.

    Release notes

    Sourced from aiohttp's releases.

    aiohttp 3.7.3 release

    Features

    • Use Brotli instead of brotlipy [#3803](https://github.com/aio-libs/aiohttp/issues/3803) <https://github.com/aio-libs/aiohttp/issues/3803>_
    • Made exceptions pickleable. Also changed the repr of some exceptions. [#4077](https://github.com/aio-libs/aiohttp/issues/4077) <https://github.com/aio-libs/aiohttp/issues/4077>_

    Bugfixes

    • Raise a ClientResponseError instead of an AssertionError for a blank HTTP Reason Phrase. [#3532](https://github.com/aio-libs/aiohttp/issues/3532) <https://github.com/aio-libs/aiohttp/issues/3532>_
    • Fix web_middlewares.normalize_path_middleware behavior for patch without slash. [#3669](https://github.com/aio-libs/aiohttp/issues/3669) <https://github.com/aio-libs/aiohttp/issues/3669>_
    • Fix overshadowing of overlapped sub-applications prefixes. [#3701](https://github.com/aio-libs/aiohttp/issues/3701) <https://github.com/aio-libs/aiohttp/issues/3701>_
    • Make BaseConnector.close() a coroutine and wait until the client closes all connections. Drop deprecated "with Connector():" syntax. [#3736](https://github.com/aio-libs/aiohttp/issues/3736) <https://github.com/aio-libs/aiohttp/issues/3736>_
    • Reset the sock_read timeout each time data is received for a aiohttp.client response. [#3808](https://github.com/aio-libs/aiohttp/issues/3808) <https://github.com/aio-libs/aiohttp/issues/3808>_
    • Fixed type annotation for add_view method of UrlDispatcher to accept any subclass of View [#3880](https://github.com/aio-libs/aiohttp/issues/3880) <https://github.com/aio-libs/aiohttp/issues/3880>_
    • Fixed querying the address families from DNS that the current host supports. [#5156](https://github.com/aio-libs/aiohttp/issues/5156) <https://github.com/aio-libs/aiohttp/issues/5156>_
    • Change return type of MultipartReader.aiter() and BodyPartReader.aiter() to AsyncIterator. [#5163](https://github.com/aio-libs/aiohttp/issues/5163) <https://github.com/aio-libs/aiohttp/issues/5163>_
    • Provide x86 Windows wheels. [#5230](https://github.com/aio-libs/aiohttp/issues/5230) <https://github.com/aio-libs/aiohttp/issues/5230>_

    Improved Documentation

    • Add documentation for aiohttp.web.FileResponse. [#3958](https://github.com/aio-libs/aiohttp/issues/3958) <https://github.com/aio-libs/aiohttp/issues/3958>_
    • Removed deprecation warning in tracing example docs [#3964](https://github.com/aio-libs/aiohttp/issues/3964) <https://github.com/aio-libs/aiohttp/issues/3964>_
    • Fixed wrong "Usage" docstring of aiohttp.client.request. [#4603](https://github.com/aio-libs/aiohttp/issues/4603) <https://github.com/aio-libs/aiohttp/issues/4603>_
    • Add aiohttp-pydantic to third party libraries [#5228](https://github.com/aio-libs/aiohttp/issues/5228) <https://github.com/aio-libs/aiohttp/issues/5228>_

    Misc

    ... (truncated)

    Changelog

    Sourced from aiohttp's changelog.

    3.7.4 (2021-02-25)

    Bugfixes

    • (SECURITY BUG) Started preventing open redirects in the aiohttp.web.normalize_path_middleware middleware. For more details, see https://github.com/aio-libs/aiohttp/security/advisories/GHSA-v6wp-4m6f-gcjg.

      Thanks to Beast Glatisant <https://github.com/g147>__ for finding the first instance of this issue and Jelmer Vernooij <https://jelmer.uk/>__ for reporting and tracking it down in aiohttp. [#5497](https://github.com/aio-libs/aiohttp/issues/5497) <https://github.com/aio-libs/aiohttp/issues/5497>_

    • Fix interpretation difference of the pure-Python and the Cython-based HTTP parsers construct a yarl.URL object for HTTP request-target.

      Before this fix, the Python parser would turn the URI's absolute-path for //some-path into / while the Cython code preserved it as //some-path. Now, both do the latter. [#5498](https://github.com/aio-libs/aiohttp/issues/5498) <https://github.com/aio-libs/aiohttp/issues/5498>_


    3.7.3 (2020-11-18)

    Features

    • Use Brotli instead of brotlipy [#3803](https://github.com/aio-libs/aiohttp/issues/3803) <https://github.com/aio-libs/aiohttp/issues/3803>_
    • Made exceptions pickleable. Also changed the repr of some exceptions. [#4077](https://github.com/aio-libs/aiohttp/issues/4077) <https://github.com/aio-libs/aiohttp/issues/4077>_

    Bugfixes

    • Raise a ClientResponseError instead of an AssertionError for a blank HTTP Reason Phrase. [#3532](https://github.com/aio-libs/aiohttp/issues/3532) <https://github.com/aio-libs/aiohttp/issues/3532>_
    • Fix web_middlewares.normalize_path_middleware behavior for patch without slash. [#3669](https://github.com/aio-libs/aiohttp/issues/3669) <https://github.com/aio-libs/aiohttp/issues/3669>_
    • Fix overshadowing of overlapped sub-applications prefixes. [#3701](https://github.com/aio-libs/aiohttp/issues/3701) <https://github.com/aio-libs/aiohttp/issues/3701>_

    ... (truncated)

    Commits
    • 0a26acc Bump aiohttp to v3.7.4 for a security release
    • 021c416 Merge branch 'ghsa-v6wp-4m6f-gcjg' into master
    • 4ed7c25 Bump chardet from 3.0.4 to 4.0.0 (#5333)
    • b61f0fd Fix how pure-Python HTTP parser interprets //
    • 5c1efbc Bump pre-commit from 2.9.2 to 2.9.3 (#5322)
    • 0075075 Bump pygments from 2.7.2 to 2.7.3 (#5318)
    • 5085173 Bump multidict from 5.0.2 to 5.1.0 (#5308)
    • 5d1a75e Bump pre-commit from 2.9.0 to 2.9.2 (#5290)
    • 6724d0e Bump pre-commit from 2.8.2 to 2.9.0 (#5273)
    • c688451 Removed duplicate timeout parameter in ClientSession reference docs. (#5262) ...
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 1
  • Bump aiohttp from 3.7 to 3.7.4 in /.github

    Bump aiohttp from 3.7 to 3.7.4 in /.github

    Bumps aiohttp from 3.7 to 3.7.4.


    dependencies 
    opened by dependabot[bot] 1
  • Weighted Additive Loss

    Weighted Additive Loss

    Feature description

    Improve AdditiveLoss by adding the possibility of weighting the different losses.
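
    A minimal sketch of what a weighted additive loss could look like (hypothetical code, not PyDGN's actual AdditiveLoss API):

    import torch
    import torch.nn as nn

    class WeightedAdditiveLoss(nn.Module):
        # Sum several losses, each scaled by its own weight.
        def __init__(self, losses, weights=None):
            super().__init__()
            self.losses = nn.ModuleList(losses)
            # default to equal weighting when no weights are given
            self.weights = weights if weights is not None else [1.0] * len(losses)

        def forward(self, predictions, targets):
            total = 0.0
            for weight, loss_fn in zip(self.weights, self.losses):
                total = total + weight * loss_fn(predictions, targets)
            return total

    # Example usage
    loss = WeightedAdditiveLoss([nn.MSELoss(), nn.L1Loss()], weights=[1.0, 0.1])
    pred, target = torch.randn(4, 1), torch.randn(4, 1)
    print(loss(pred, target))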

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Telegram Bot support

    Telegram Bot support

    Feature description

    Add Telegram bot support to send messages whenever an experiment breaks suddenly or the entire set of experiments has finished (with the chance to have finer granularity).

    Ideas on how to do it

    Once you create a bot, it should be easy to send a message to a particular chat.
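
    The Telegram side is indeed simple; a sketch using the public Bot API (the token and chat id below are placeholders):

    import requests

    BOT_TOKEN = "123456:ABC-your-bot-token"   # obtained from @BotFather
    CHAT_ID = "123456789"                     # the chat to notify

    def notify(text):
        # send a message to the chat via the Bot API's sendMessage method
        url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
        response = requests.post(url, data={"chat_id": CHAT_ID, "text": text})
        response.raise_for_status()

    notify("PyDGN: all experiments have finished.")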

    Additional info

    No response

    opened by diningphil 1
  • Accumulate predictions for metrics that require global statistics

    Accumulate predictions for metrics that require global statistics

    Feature description

    Metrics like AP require that all the samples of the train/val/test dataset are taken into account when computing a score. We should add an option that allows, at the cost of greater memory usage, accumulating all predictions and target values until the end of an epoch and then computing the epoch score.
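
    A sketch of how such accumulation could work in principle (hypothetical class, not PyDGN's Metric interface):

    import torch
    from sklearn.metrics import average_precision_score

    class EpochAveragePrecision:
        # Accumulate predictions/targets over an epoch and compute the score once
        # at the end, trading memory for metrics (like AP) that need global statistics.
        def __init__(self):
            self._preds, self._targets = [], []

        def update(self, preds, targets):
            # detach and move to CPU so accumulation does not hold GPU memory
            self._preds.append(preds.detach().cpu())
            self._targets.append(targets.detach().cpu())

        def compute(self):
            preds = torch.cat(self._preds)
            targets = torch.cat(self._targets)
            return average_precision_score(targets.numpy(), preds.numpy())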

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Metric improvement

    Metric improvement

    Feature description

    Metric should always use the result from the _handle_reduction function, even when accumulating the number of samples.

    This is because _handle_reduction may do something more complicated in some metrics that override the function.

    Be careful, however, to make sure this works for both "mean" and "sum" reductions.
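
    To make the intent concrete, here is a sketch of the pattern being described; the class and method names mimic the issue's wording but are hypothetical:

    import torch

    class RunningMetric:
        def __init__(self, reduction="mean"):
            self.reduction = reduction
            self.total = 0.0
            self.num_samples = 0

        def _handle_reduction(self, per_sample_values):
            # subclasses may override this with something more complicated
            if self.reduction == "mean":
                return per_sample_values.mean()
            return per_sample_values.sum()

        def update(self, per_sample_values):
            batch_size = per_sample_values.shape[0]
            reduced = self._handle_reduction(per_sample_values)
            if self.reduction == "mean":
                # re-weight the batch mean by its size so the epoch mean stays exact
                self.total += reduced.item() * batch_size
            else:
                self.total += reduced.item()
            self.num_samples += batch_size

        def compute(self):
            return self.total / self.num_samples if self.reduction == "mean" else self.total

    m = RunningMetric(reduction="mean")
    m.update(torch.tensor([1.0, 2.0, 3.0]))
    m.update(torch.tensor([4.0]))
    print(m.compute())  # 2.5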

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Support for Single-experiment/Multi-GPU

    Support for Single-experiment/Multi-GPU

    Feature description

    Allow an experiment to run on multiple GPUs.

    Ideas on how to do it

    No response

    Additional info

    Remember to set the appropriate seed on all GPUs using torch.cuda.manual_seed_all
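
    For the seeding note in particular, a sketch of the relevant calls:

    import random
    import numpy as np
    import torch

    def set_seed(seed):
        # seed every source of randomness; manual_seed_all covers all visible GPUs,
        # which matters once a single experiment spans multiple devices
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)

    set_seed(42)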

    opened by diningphil 1