A Python library for Deep Graph Networks


PyDGN

Wiki

Description

This is a Python library to easily experiment with Deep Graph Networks (DGNs). It provides automatic management of data splitting, loading, and the most common experimental settings. It also handles both model selection and risk assessment procedures, trying many different configurations in parallel (on CPU or GPU). This repository is built upon the PyTorch Geometric library, which provides support for data management.

If you happen to use or modify this code, please remember to cite our tutorial paper:

Bacciu Davide, Errica Federico, Micheli Alessio, Podda Marco: A Gentle Introduction to Deep Learning for Graphs, Neural Networks, 2020. DOI: 10.1016/j.neunet.2020.06.006.

If you are interested in a rigorous evaluation of Deep Graph Networks, check this out:

Errica Federico, Podda Marco, Bacciu Davide, Micheli Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Code

Installation:

(We assume git and Miniconda/Anaconda are installed)

First, make sure gcc 5.2.0 is installed: conda install -c anaconda libgcc=5.2.0. Then, check that echo $LD_LIBRARY_PATH always contains :/home/[your user name]/miniconda3/lib. Finally, run the following commands from your terminal:

source setup/install.sh [<your_cuda_version>]
pip install pydgn

Where <your_cuda_version> is an optional argument that can be either cpu, cu102, or cu111 for PyTorch >= 1.8.0. If you do not provide a CUDA version, the script will default to cpu. The script will create a virtual environment named pydgn, with all the required packages needed to run our code. Important: do NOT run this command using bash instead of source!

Remember that PyTorch macOS binaries don't support CUDA; install PyTorch from source if CUDA is needed.

Usage:

Preprocess your dataset (see also Wiki)

python build_dataset.py --config-file [your data config file]

Example

python build_dataset.py --config-file DATA_CONFIGS/config_PROTEINS.yml 
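
For illustration only, a data config might contain entries along the lines of the sketch below; the keys are hypothetical and simply mirror the command-line options used later in this README, while DATA_CONFIGS/config_PROTEINS.yml remains the authoritative example.

# Hypothetical sketch of a data config file (illustrative only; see
# DATA_CONFIGS/config_PROTEINS.yml for the real schema)
dataset_name: PROTEINS
dataset_class: pydgn.data.dataset.TUDatasetInterface
data_root: DATA
splits_folder: DATA_SPLITS/CHEMICAL/
outer_folds: 10
inner_folds: 1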

Launch an experiment in debug mode (see also Wiki)

python launch_experiment.py --config-file [your exp. config file] --splits-folder [the splits MAIN folder] --data-splits [the splits file] --data-root [root folder of your data] --dataset-name [name of the dataset] --dataset-class [class that handles the dataset] --max-cpus [max cpu parallelism] --max-gpus [max gpu parallelism] --gpus-per-task [how many gpus to allocate for each job] --final-training-runs [how many final runs when evaluating on test. Results are averaged] --result-folder [folder where to store results]

Example (GPU required)

python launch_experiment.py --config-file MODEL_CONFIGS/config_SupToyDGN_RandomSearch.yml --splits-folder DATA_SPLITS/CHEMICAL/ --data-splits DATA_SPLITS/CHEMICAL/PROTEINS/PROTEINS_outer10_inner1.splits --data-root DATA --dataset-name PROTEINS --dataset-class pydgn.data.dataset.TUDatasetInterface --max-cpus 1 --max-gpus 1 --final-training-runs 1 --result-folder RESULTS/DEBUG

To debug your code, it is useful to add --debug to the command above. Note, however, that the CLI will not work as expected here, as the code will be executed sequentially. After debugging, if you need sequential execution, you can use --max-cpus 1 --max-gpus 1 --gpus-per-task [0/1] without the --debug option.

Grid Search 101

Have a look at one of the config files.
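
As an illustration only, a grid section in such a file might look roughly like the sketch below; the hyper-parameter names are hypothetical, and the shipped config files are the authoritative reference for the exact schema. A grid search tries every combination of the listed values.

# Hypothetical sketch of a grid-search section (illustrative only; check the
# actual config files for the exact schema)
grid:
  learning_rate:
    - 0.01
    - 0.001
  hidden_units:
    - 32
    - 64
  dropout:
    - 0.0
    - 0.5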

Random Search 101

Specify a num_samples in the config file with the number of random trials, replace grid with random, and specify a sampling method for each hyper-parameter. We provide different sampling methods:

  • choice --> pick at random from a list of arguments
  • uniform --> pick uniformly between min and max arguments
  • normal --> sample from a normal distribution with the given mean and std
  • randint --> pick an integer at random between min and max
  • loguniform --> pick following the reciprocal distribution between log_min and log_max, with a specified base

There is one config file, namely config_SupToyDGN_RandomSearch.yml, which you can check to see an example.
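
For illustration only, a random-search section could look roughly like the sketch below; the keys and layout are hypothetical, and config_SupToyDGN_RandomSearch.yml is the reference for the exact schema. num_samples sets how many random configurations are drawn, and each hyper-parameter declares one of the sampling methods listed above.

# Hypothetical sketch of a random-search section (illustrative only; see
# config_SupToyDGN_RandomSearch.yml for the exact schema)
num_samples: 20                  # number of random configurations to draw
random:
  learning_rate:
    loguniform: [1e-5, 1e-2]     # sampled log-uniformly between these bounds
  hidden_units:
    choice: [32, 64, 128]        # pick one value at random
  dropout:
    uniform: [0.0, 0.5]          # min, max
  weight_decay:
    normal: [0.0, 0.001]         # mean, std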

Data Splits

We provide the data splits taken from

Errica Federico, Podda Marco, Bacciu Davide, Micheli Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Code

in the DATA_SPLITS folder.

Credits:

This is a joint project with Marco Podda (GitHub / Homepage), whom I thank for his relentless dedication.

Many thanks to Antonio Carta (GitHub / Homepage) for incorporating the Ray library (see v0.4.0) into PyDGN! This will be of tremendous help.

Many thanks to Danilo Numeroso (GitHub / Homepage) for implementing a very flexible random search! This is a very convenient alternative to grid search.

Contributing

This research software is provided as-is. We are working on this library in our spare time.

If you find a bug, please open an issue to report it, and we will do our best to solve it. For generic/technical questions, please email us rather than opening an issue.

License:

PyDGN is GPL 3.0 licensed, as written in the LICENSE file.

Troubleshooting

As of August 15th, 2021, there is an issue with PyTorch 1.9.0 that impacts the CLI. This is why the setup script installs PyTorch 1.8.1 in the pydgn conda environment, until PyTorch 1.10 (known to solve the issue) is released.

--

If you get errors like /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found:

  • make sure gcc 5.2.0 is installed: conda install -c anaconda libgcc=5.2.0
  • echo $LD_LIBRARY_PATH should contain :/home/[your user name]/[your anaconda or miniconda folder name]/lib
  • after checking the above points, you can reinstall everything with pip using the --no-cache-dir option
Comments
  • Keep getting raylet error

    🔨 Describe the bug

    Hi, I keep getting a raylet error when trying to run the example. Is there a way to stop using Ray, since I am running the experiment on my local computer?

    Thank you!

    (raylet) /home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/autoscaler/_private/cli_logger.py:57: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via pip install 'ray[default]'. Please update your install command.
    (raylet) warnings.warn(
    2022-10-14 13:08:12,974 WARNING worker.py:1189 -- The agent on node EW22-05284 failed with the following error:
    Traceback (most recent call last):
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/new_dashboard/agent.py", line 354, in <module>
        loop.run_until_complete(agent.run())
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
        return future.result()
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/new_dashboard/agent.py", line 144, in run
        modules = self._load_modules()
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/new_dashboard/agent.py", line 98, in _load_modules
        c = cls(self)
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/new_dashboard/modules/reporter/reporter_agent.py", line 148, in __init__
        self._metrics_agent = MetricsAgent(dashboard_agent.metrics_export_port)
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/_private/metrics_agent.py", line 75, in __init__
        prometheus_exporter.new_stats_exporter(
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/_private/prometheus_exporter.py", line 333, in new_stats_exporter
        exporter = PrometheusStatsExporter(
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/_private/prometheus_exporter.py", line 266, in __init__
        self.serve_http()
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/ray/_private/prometheus_exporter.py", line 320, in serve_http
        start_http_server(
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/prometheus_client/exposition.py", line 168, in start_wsgi_server
        TmpServer.address_family, addr = _get_best_family(addr, port)
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/site-packages/prometheus_client/exposition.py", line 157, in _get_best_family
        infos = socket.getaddrinfo(address, port)
      File "/home/jwtxwd/anaconda3/envs/pydgn/lib/python3.8/socket.py", line 918, in getaddrinfo
        for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
    socket.gaierror: [Errno -2] Name or service not known


    bug 
    opened by jwtxwd 4
  • Training engine and returned data list

    Feature description

    When shuffling the dataset, the training engine will return the data list shuffled according to the permutation of the data sampler. We should make sure that we reorder the data list.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 2
  • [WIP] feat(plotter): Add W&B Logging

    This PR proposes to add Weights & Biases logging to the library using helpful TensorBoard-based utilities. Instead of a separate logger, I currently propose that we simply monkeypatch and upload the TensorBoard logs to W&B.

    Leaving it as a Draft for now to start a conversation.

    CC: @diningphil @gravins

    opened by SauravMaheshkar 2
  • Fix Ray not deallocating GPU memory

    🔨 Describe the bug

    Some idle workers do not release GPU memory. This is a problem, as described (together with a potential fix) here: https://docs.ray.io/en/latest/ray-core/tasks/using-ray-with-gpus.html#workers-not-releasing-gpu-resources

    bug 
    opened by diningphil 1
  • Add option to specify subset of GPUs

    Feature description

    In case one wants to force specific GPU IDs to be used, it should be possible to do so when running pydgn-train.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Bump aiohttp from 3.7 to 3.7.4

    Bumps aiohttp from 3.7 to 3.7.4.

    Release notes

    Sourced from aiohttp's releases.

    aiohttp 3.7.3 release

    Features

    • Use Brotli instead of brotlipy [#3803](https://github.com/aio-libs/aiohttp/issues/3803) <https://github.com/aio-libs/aiohttp/issues/3803>_
    • Made exceptions pickleable. Also changed the repr of some exceptions. [#4077](https://github.com/aio-libs/aiohttp/issues/4077) <https://github.com/aio-libs/aiohttp/issues/4077>_

    Bugfixes

    • Raise a ClientResponseError instead of an AssertionError for a blank HTTP Reason Phrase. [#3532](https://github.com/aio-libs/aiohttp/issues/3532) <https://github.com/aio-libs/aiohttp/issues/3532>_
    • Fix web_middlewares.normalize_path_middleware behavior for patch without slash. [#3669](https://github.com/aio-libs/aiohttp/issues/3669) <https://github.com/aio-libs/aiohttp/issues/3669>_
    • Fix overshadowing of overlapped sub-applications prefixes. [#3701](https://github.com/aio-libs/aiohttp/issues/3701) <https://github.com/aio-libs/aiohttp/issues/3701>_
    • Make BaseConnector.close() a coroutine and wait until the client closes all connections. Drop deprecated "with Connector():" syntax. [#3736](https://github.com/aio-libs/aiohttp/issues/3736) <https://github.com/aio-libs/aiohttp/issues/3736>_
    • Reset the sock_read timeout each time data is received for a aiohttp.client response. [#3808](https://github.com/aio-libs/aiohttp/issues/3808) <https://github.com/aio-libs/aiohttp/issues/3808>_
    • Fixed type annotation for add_view method of UrlDispatcher to accept any subclass of View [#3880](https://github.com/aio-libs/aiohttp/issues/3880) <https://github.com/aio-libs/aiohttp/issues/3880>_
    • Fixed querying the address families from DNS that the current host supports. [#5156](https://github.com/aio-libs/aiohttp/issues/5156) <https://github.com/aio-libs/aiohttp/issues/5156>_
    • Change return type of MultipartReader.aiter() and BodyPartReader.aiter() to AsyncIterator. [#5163](https://github.com/aio-libs/aiohttp/issues/5163) <https://github.com/aio-libs/aiohttp/issues/5163>_
    • Provide x86 Windows wheels. [#5230](https://github.com/aio-libs/aiohttp/issues/5230) <https://github.com/aio-libs/aiohttp/issues/5230>_

    Improved Documentation

    • Add documentation for aiohttp.web.FileResponse. [#3958](https://github.com/aio-libs/aiohttp/issues/3958) <https://github.com/aio-libs/aiohttp/issues/3958>_
    • Removed deprecation warning in tracing example docs [#3964](https://github.com/aio-libs/aiohttp/issues/3964) <https://github.com/aio-libs/aiohttp/issues/3964>_
    • Fixed wrong "Usage" docstring of aiohttp.client.request. [#4603](https://github.com/aio-libs/aiohttp/issues/4603) <https://github.com/aio-libs/aiohttp/issues/4603>_
    • Add aiohttp-pydantic to third party libraries [#5228](https://github.com/aio-libs/aiohttp/issues/5228) <https://github.com/aio-libs/aiohttp/issues/5228>_

    Misc

    ... (truncated)

    Changelog

    Sourced from aiohttp's changelog.

    3.7.4 (2021-02-25)

    Bugfixes

    • (SECURITY BUG) Started preventing open redirects in the aiohttp.web.normalize_path_middleware middleware. For more details, see https://github.com/aio-libs/aiohttp/security/advisories/GHSA-v6wp-4m6f-gcjg.

      Thanks to Beast Glatisant <https://github.com/g147>__ for finding the first instance of this issue and Jelmer Vernooij <https://jelmer.uk/>__ for reporting and tracking it down in aiohttp. [#5497](https://github.com/aio-libs/aiohttp/issues/5497) <https://github.com/aio-libs/aiohttp/issues/5497>_

    • Fix interpretation difference of the pure-Python and the Cython-based HTTP parsers construct a yarl.URL object for HTTP request-target.

      Before this fix, the Python parser would turn the URI's absolute-path for //some-path into / while the Cython code preserved it as //some-path. Now, both do the latter. [#5498](https://github.com/aio-libs/aiohttp/issues/5498) <https://github.com/aio-libs/aiohttp/issues/5498>_


    3.7.3 (2020-11-18)

    Features

    • Use Brotli instead of brotlipy [#3803](https://github.com/aio-libs/aiohttp/issues/3803) <https://github.com/aio-libs/aiohttp/issues/3803>_
    • Made exceptions pickleable. Also changed the repr of some exceptions. [#4077](https://github.com/aio-libs/aiohttp/issues/4077) <https://github.com/aio-libs/aiohttp/issues/4077>_

    Bugfixes

    • Raise a ClientResponseError instead of an AssertionError for a blank HTTP Reason Phrase. [#3532](https://github.com/aio-libs/aiohttp/issues/3532) <https://github.com/aio-libs/aiohttp/issues/3532>_
    • Fix web_middlewares.normalize_path_middleware behavior for patch without slash. [#3669](https://github.com/aio-libs/aiohttp/issues/3669) <https://github.com/aio-libs/aiohttp/issues/3669>_
    • Fix overshadowing of overlapped sub-applications prefixes. [#3701](https://github.com/aio-libs/aiohttp/issues/3701) <https://github.com/aio-libs/aiohttp/issues/3701>_

    ... (truncated)

    Commits
    • 0a26acc Bump aiohttp to v3.7.4 for a security release
    • 021c416 Merge branch 'ghsa-v6wp-4m6f-gcjg' into master
    • 4ed7c25 Bump chardet from 3.0.4 to 4.0.0 (#5333)
    • b61f0fd Fix how pure-Python HTTP parser interprets //
    • 5c1efbc Bump pre-commit from 2.9.2 to 2.9.3 (#5322)
    • 0075075 Bump pygments from 2.7.2 to 2.7.3 (#5318)
    • 5085173 Bump multidict from 5.0.2 to 5.1.0 (#5308)
    • 5d1a75e Bump pre-commit from 2.9.0 to 2.9.2 (#5290)
    • 6724d0e Bump pre-commit from 2.8.2 to 2.9.0 (#5273)
    • c688451 Removed duplicate timeout parameter in ClientSession reference docs. (#5262) ...
    • Additional commits viewable in compare view


    dependencies 
    opened by dependabot[bot] 1
  • Bump aiohttp from 3.7 to 3.7.4 in /.github

    Bumps aiohttp from 3.7 to 3.7.4.

    dependencies 
    opened by dependabot[bot] 1
  • Weighted Additive Loss

    Feature description

    Improve AdditiveLoss by adding the possibility of weighting the different losses.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Telegram Bot support

    Feature description

    Add Telegram bot support to send messages whenever an experiment breaks unexpectedly or the entire set of experiments has finished (possibly with finer-grained notifications).

    Ideas on how to do it

    Once you create a bot, it should be easy to send a message to a particular chat.

    Additional info

    No response

    opened by diningphil 1
  • Accumulate predictions for metric that require global statistics

    Feature description

    Metrics like AP require that all the samples of the train/val/test dataset are taken into account when computing the score. We should add an option to accumulate all predictions and target values until the end of an epoch (at the cost of more memory) and then compute the epoch score.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Metric improvement

    Feature description

    Metric should always use the result from the _handle_reduction function, even when accumulating the number of samples.

    This is because _handle_reduction may do something more complicated in some metrics that override the function.

    Be careful, however, that this works for both "mean" and "sum" reductions.

    Ideas on how to do it

    No response

    Additional info

    No response

    opened by diningphil 1
  • Support for Single-experiment/Multi-GPU

    Feature description

    Allow an experiment to run on multiple GPUs.

    Ideas on how to do it

    No response

    Additional info

    Remember to set the appropriate seed on all GPUs using torch.cuda.manual_seed_all

    opened by diningphil 1
Releases: v1.3.0.post2
Owner: Federico Errica