PostQF

Copyright © 2022 Ralph Seichter

Overview

PostQF is a user-friendly Postfix queue data filter which operates on data produced by postqueue -j. See the postqueue(1) manual page's subsection titled "JSON object format" for details. PostQF offers convenient features for the analysis and cleanup of Postfix mail queues.

I have used the all-purpose JSON manipulation utility "jq" before, but found it inconvenient for everyday Postfix administration tasks. "jq" offers great flexibility and handles all sorts of JSON input, but it comes at the cost of complexity. PostQF is an alternative specifically tailored for easier access to Postfix queues.

To facilitate the use of Unix-like pipelines, PostQF usually reads from stdin and writes to stdout. Using command line arguments, you can override this behaviour and define one or more input files and/or an output file. Depending on the context, a horizontal dash - represents either stdin or stdout. See the command line usage description below.
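
For example, the following command reads previously saved queue data from a file instead of stdin and selects stdout explicitly as the output; the file name saved.json is just a placeholder:

postqf -q deferred -o - saved.json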

Example usage

Find all messages in the deferred queue where the delay reason contains the string "connection timed out".

postqueue -j | postqf -q deferred -d 'connection timed out'

Find all messages in the active or hold queues which have at least one recipient in the example.com or example.org domains, and write the matching JSON records into the file /tmp/output.

postqueue -j | postqf -q 'active|hold' -r '@example\.(com|org)' -o /tmp/output

Find all messages in all queues for which the sender address is alice@gmail.com or bob@gmail.com, and pipe the queue IDs to postsuper in order to place the matching messages on hold.

postqueue -j | postqf -s '^(alice|bob)@gmail\.com$' -i | postsuper -h -

Print the number of messages which arrived during the last 30 minutes.

postqueue -j | postqf -a 30m | wc -l

The final example assumes a directory /tmp/data with several files, each containing JSON output produced at some previous time. The command writes all queue IDs which have ever been in the hold queue to the file idlist, relying on BASH wildcard expansion to generate the list of input files.

postqf -i -q hold /tmp/data/*.json > idlist

Filters

Queue entries can be easily filtered by

  • Arrival time
  • Delay reason
  • Queue name
  • Recipient address
  • Sender address

and combinations thereof, using regular expressions. Anchoring is optional, meaning that plain text is treated as a substring pattern.
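
For example, because anchoring is optional, a plain substring pattern and an anchored expression behave differently:

postqueue -j | postqf -s 'example\.com'          # any sender containing example.com
postqueue -j | postqf -s '^alice@example\.com$'  # exactly alice@example.com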

The arrival time filters do not use regular expressions, but instead a human-readable representation of a time difference. The format is W unit, without spaces. W is a "whole number" (i.e. a number ≥ 0). The unit is a single letter among s, m, h, d (seconds, minutes, hours, days).

-b 3d and -a 90m are both examples of valid command line arguments. Note that arrival filters are interpreted relative to the time PostQF is run. The two examples signify "message arrived more than 3 days ago" (before timestamp) and "message arrived less than 90 minutes ago" (after timestamp), respectively.
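
As a further example, the following pipeline should print the queue IDs of all messages which arrived more than three days ago:

postqueue -j | postqf -b 3d -i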

Command line usage

postqf [-h] [-i] [-d [REGEX]] [-q [REGEX]] [-r [REGEX]] [-s [REGEX]]
       [-a [TS] | -b [TS]] [-o [PATH]] [PATH [PATH ...]]

positional arguments:
  PATH        Input file. Use a dash "-" for standard input.

optional arguments:
  -h, --help  show this help message and exit
  -i          ID output only
  -o [PATH]   Output file. Use a dash "-" for standard output.

Regular expression filters:
  -d [REGEX]  Delay reason filter
  -q [REGEX]  Queue name filter
  -r [REGEX]  Recipient address filter
  -s [REGEX]  Sender address filter

Arrival time filters (mutually exclusive):
  -a [TS]     Message arrived after TS
  -b [TS]     Message arrived before TS

Installation

The only installation requirement is Python 3.7 or newer. PostQF is distributed via PyPI.org and can usually be installed using pip. If this fails, or if both Python 2.x and 3.x are installed on your machine, use pip3 instead.

If possible, use the recommended installation with a Python virtual environment. Site-wide installation usually requires root privileges.

# Recommended: Installation using a Python virtual environment.
mkdir ~/postqf
cd ~/postqf
python3 -m venv .venv
source .venv/bin/activate
pip install -U pip postqf
# Alternative: Site-wide installation, requires root access.
sudo pip install postqf

The pip installation process adds a launcher executable postqf, either site-wide or in the Python virtual environment. In the latter case, the launcher is placed into the directory .venv/bin, which is automatically added to your PATH variable when you activate the venv environment as shown above.
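
After installation, you can verify that the launcher is available by displaying the built-in help:

postqf -h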

Contact

The project is hosted on GitHub in the rseichter/postqf repository. If you have suggestions or run into any problems, please check the discussions section first. There is also an issue tracker available, and the build configuration file contains a contact email address.

Comments
  • Permit using "before" and "after" time filters at the same time

    The command line arguments -a and -b are mutually exclusive as of release 0.1. If using both at the same time was permitted, users could express an interval, allowing searches for "message arrived between timestamps X and Y".

    enhancement 
    opened by rseichter 1
  • Support absolute time for before/after filter arguments

    Command line arguments -a and -b currently allow only passing a time difference like 45m or 3d. It would be helpful to also support strings representing absolute points in time. Here is an example for how it might look when using the ISO 8601 format:

    $ date --iso-8601=s
    2022-01-23T22:10:56+01:00
    
    $ postqf -b '2022-01-23T22:10:56+01:00'
    

    It would also be useful to allow passing epoch time arguments, because postqueue -j returns message arrival times as seconds since the Epoch.

    enhancement 
    opened by rseichter 1
Releases (0.5)
  • 0.5 (Feb 6, 2022)

    In addition to filtering JSON input and producing JSON output, PostQF can now also generate a number of simple reports which answer some frequently asked questions about message queue content. The following data can be shown in reports:

    • Delay reason
    • Recipient address
    • Recipient domain
    • Sender address
    • Sender domain
  • 0.4 (Feb 2, 2022)

  • 0.3 (Jan 28, 2022)

    • Output is now correctly rendered as JSON instead of a Python dict.
    • Simplified installation process. In addition to pip based setup, an installation BASH script is now provided.
  • 0.2 (Jan 24, 2022)

    • Release 0.2 introduces the ability to use both -a and -b time filters simultaneously, in order to specify time intervals.
    • Time filter strings can now use ISO 8601 strings and Unix time in addition to relative time differences expressed in the form 42m or 2d. This allows users to also specify absolute points in time as arrival thresholds.
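
    A sketch of such an interval search, combining both filters to select the queue IDs of messages which arrived between three days and one day ago:

    postqueue -j | postqf -a 3d -b 1d -i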
  • 0.1 (Jan 23, 2022)
