
smilelogging

Python logging package for easy reproducible experimenting in research.

Why you may need this package

This project provides an easy-to-use (as easy as possible) package to enable reproducible experimenting in research. Here is a frustrating situation you may have encountered:

I am working on a project. I had a fantastic idea some time (one week, one month, or even one year) ago. Now I am looking at the results of that experiment, but I just cannot reproduce them anymore. I cannot remember which script and which hyperparameters I used. Even worse, I've since modified the code (a lot). I don't know where I messed it up...

If you do not use this package, what you would usually do is:

  • First, use Git/GitHub to manage your code. Always run experiments after a git commit.
  • Second, before each experiment, set up a unique experiment folder (with a unique ID to label that experiment -- we call it the ExpID).
  • Third, when running an experiment, print your git commit ID (we call it the CodeID) and the arguments in the log.

Every result is uniquely bound to an ExpID, which corresponds to a unique experiment folder. In that folder, the CodeID and arguments are saved. So ideally, as long as we know the ExpID, we should be able to rerun the experiment under the same conditions.

These steps are pretty simple, but implementing them over and over again in each project is still quite annoying. This package handles them for you with roughly 3~4 changed lines of code; for comparison, a manual version of these steps is sketched below.
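For reference, a bare-bones manual version of the three steps above might look roughly like this. It is only a hypothetical illustration of the boilerplate that smilelogging automates, not part of the package; the folder and file names are placeholders.

import os
import subprocess
import sys
import time

# Step 2: create a unique experiment folder labeled by an ExpID (here, a timestamp).
exp_id = time.strftime("%Y%m%d-%H%M%S")
exp_dir = os.path.join("Experiments", "my_experiment_" + exp_id)
os.makedirs(exp_dir, exist_ok=True)

# Step 3: record the git commit ID (CodeID) and the command-line arguments in the log.
code_id = subprocess.run(["git", "rev-parse", "--short", "HEAD"],
                         capture_output=True, text=True).stdout.strip()
with open(os.path.join(exp_dir, "log.txt"), "w") as f:
    f.write("CodeID: %s\n" % code_id)
    f.write("Arguments: %s\n" % " ".join(sys.argv))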

Usage

Step 0: Install the package (requires Python >= 3.4)

# --upgrade to make sure you install the latest version
pip install smilelogging --upgrade

Step 1: Modify your code

Here we use the official PyTorch ImageNet example for illustration.

# 1. Add this at the top of your script
from smilelogging import Logger

# 2. Replace the argument parser
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
# ==> change the above line to the following:
from smilelogging import argparser as parser

# 3. Add the logger and change print if necessary
args = parser.parse_args()
# ==> change the above line to the following 3 lines:
args = parser.parse_args()
logger = Logger(args)
global print; print = logger.log_printer.logprint  # redirect print so that logs are also written to a txt file

TIP: overwriting the default Python print function may not be good practice. A better way is logprint = logger.log_printer.logprint, used like logprint('Test accuracy: %.4f' % test_acc). This will print the log to a txt file at path log/log.txt.
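Putting the pieces together, a minimal script following this tip might look as follows. This is only a sketch: it assumes the smilelogging argparser supports add_argument like a standard argparse parser, and the test accuracy value is a placeholder.

from smilelogging import Logger
from smilelogging import argparser as parser

# Add your own arguments as with a normal argparse parser (assumption: the
# smilelogging argparser behaves like argparse.ArgumentParser here).
parser.add_argument('--lr', type=float, default=0.1)

args = parser.parse_args()
logger = Logger(args)

# Use a dedicated logprint function instead of overriding the built-in print.
logprint = logger.log_printer.logprint

test_acc = 0.7123  # placeholder metric for illustration
logprint('Test accuracy: %.4f' % test_acc)  # also written to log/log.txt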

Step 2: Run experiments

The original ImageNet training snippet is:

CUDA_VISIBLE_DEVICES=0 python main.py -a resnet18 [imagenet-folder with train and val folders]

Now, try this:

CUDA_VISIBLE_DEVICES=0 python main.py -a resnet18 [imagenet-folder with train and val folders] --project_name Scratch__resnet18__imagenet --screen_print

This command will set up an experiment folder under the path Experiments/Scratch__resnet18__imagenet_XXX. The XXX part is an ExpID automatically assigned based on the time the command is run. Below is an example from my PC:

Experiments/
└── Scratch__resnet18__imagenet_SERVER138-20211021-145936
    ├── gen_img
    ├── log
    │   ├── git_status.txt
    │   ├── gpu_info.txt
    │   ├── log.txt
    │   ├── params.yaml
    │   └── plot
    └── weights

Congrats! You're all set.

As shown, three folders are created automatically: gen_img, weights, and log. Log text is saved to log/log.txt; arguments are saved to log/params.yaml and also at the head of log/log.txt. Below is an example of the first few lines of log/log.txt:

cd /home/wanghuan/Projects/TestProject
CUDA_VISIBLE_DEVICES=1 python main.py -a resnet18 /home/wanghuan/Dataset/ILSVRC/Data/CLS-LOC/ --project Scracth_resnet18_imagenet --screen_print

('arch': resnet18) ('batch_size': 256) ('cache_ignore': ) ('CodeID': f30e6078) ('data': /home/wanghuan/Dataset/ILSVRC/Data/CLS-LOC/) ('debug': False) ('dist_backend': nccl) ('dist_url': tcp://224.66.41.62:23456) ('epochs': 90) ('evaluate': False) ('gpu': None) ('lr': 0.1) ('momentum': 0.9) ('multiprocessing_distributed': False) ('note': ) ('pretrained': False) ('print_freq': 10) ('project_name': Scracth_resnet18_imagenet) ('rank': -1) ('resume': ) ('screen_print': True) ('seed': None) ('start_epoch': 0) ('weight_decay': 0.0001) ('workers': 4) ('world_size': -1)

[180853 22509 2021/10/21-18:08:54] ==> Caching various config files to 'Experiments/Scracth_resnet18_imagenet_SERVER138-20211021-180853/.caches'

Note that it tells us:

  • (1) where the code is
  • (2) what command was used to run this experiment
  • (3) what arguments were used
  • (4) what the CodeID is -- useful when rolling back to prior code versions (git reset --hard <CodeID>)
  • (5) where the code files (*.py, *.json, *.yaml, etc.) are backed up -- note the log line "==> Caching various config files to ...". Ideally, the CodeID alone is enough to recover the previous code; caching the code files is double insurance
  • (6) the prefix "[180853 22509 2021/10/21-18:08:54]" automatically added at the beginning of each log line when the logprint function is used, where 180853 is short for the full ExpID SERVER138-20211021-180853 and 22509 is the program PID (useful if you want to kill the job, e.g., kill -9 22509)

More explanations about the folder setup

The weights folder is meant to store checkpoints during training, and gen_img is meant to store images generated during training (as in a generative model project). To use them in your code:

weights_path = logger.weights_path
gen_img_path = logger.gen_img_path
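For example, a typical PyTorch checkpoint save using weights_path might look like the sketch below; the model and epoch here are placeholders standing in for your own training-loop state.

import os

import torch
import torchvision.models as models

# Placeholders standing in for your training-loop state.
model = models.resnet18()
epoch = 0

# Save a training checkpoint into the experiment's weights folder.
ckpt_path = os.path.join(logger.weights_path, 'ckpt_epoch_%d.pth' % epoch)
torch.save({'epoch': epoch, 'state_dict': model.state_dict()}, ckpt_path)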

More explanations about the arguments and more tips

  • --screen_print means the logs will also be printed to the console (namely, your screen). If it is not used, the log is only saved to log/log.txt and not printed to the screen.
  • If you are debugging code, you may not want to create an experiment folder under Experiments. In that case, use --debug, for example:
CUDA_VISIBLE_DEVICES=0 python main.py -a resnet18 [imagenet-folder with train and val folders] --debug

This will save all the logs in Debug_Dir instead of Experiments (Experiments is meant to store the formal experiment results).

TODO

  • Add plots of training and testing metrics (e.g., accuracy, PSNR).

Collaboration / Suggestions

Currently, this is still a young project. Any collaboration or suggestions are welcome; contact Huan Wang (Email: [email protected]).
