Pythonic particle-based (super-droplet) warm-rain/aqueous-chemistry cloud microphysics package with box, parcel & 1D/2D prescribed-flow examples in Python, Julia and Matlab

Overview

PySDM


PySDM is a package for simulating the dynamics of populations of particles. It is intended to serve as a building block for simulation systems modelling fluid flows involving a dispersed phase, with PySDM being responsible for representing the dispersed phase. Currently, development is focused on atmospheric cloud physics applications, in particular on modelling the dynamics of particles immersed in moist air using the particle-based (a.k.a. super-droplet) approach to represent aerosol/cloud/rain microphysics. The package features a Pythonic high-performance implementation of the Super-Droplet Method (SDM) Monte-Carlo algorithm for representing collisional growth (Shima et al. 2009), hence the name.

PySDM has two alternative parallel number-crunching backends: a multi-threaded CPU backend based on Numba and a GPU-resident backend built on top of ThrustRTC. The Numba backend (aliased CPU) features multi-threaded parallelism for multi-core CPUs and uses just-in-time compilation based on the LLVM infrastructure. The ThrustRTC backend (aliased GPU) offers GPU-resident operation of PySDM leveraging the SIMT parallelisation model. Using the GPU backend requires NVIDIA hardware and the CUDA driver.
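With these aliases, switching backends amounts to a one-line change, e.g. (a minimal sketch; instantiating GPU() will fail on machines without CUDA-capable hardware):

from PySDM.backends import CPU, GPU

backend = CPU()    # multi-threaded, Numba-based (LLVM JIT)
# backend = GPU()  # GPU-resident, ThrustRTC-based (requires CUDA)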

For an overview paper on PySDM v1 (and the preferred item to cite if using PySDM), see Bartman et al. 2021 arXiv e-print (submitted to JOSS). For a list of talks and other materials on PySDM, see the project wiki.

A pdoc-generated documentation of PySDM public API is maintained at: https://atmos-cloud-sim-uj.github.io/PySDM

Dependencies and Installation

PySDM dependencies are: NumPy, Numba, SciPy, Pint, chempy, pyevtk, ThrustRTC and CURandRTC.

To install PySDM using pip, use: pip install PySDM (or pip install git+https://github.com/atmos-cloud-sim-uj/PySDM.git to get updates beyond the latest release).

Conda users may use pip as well (see the Installing non-conda packages section in the conda docs); PySDM dependencies are also available through conda channels.

For development purposes, we suggest cloning the repository and installing it using pip install -e . (editable mode), as shown below. Test-time dependencies are listed in the test-time-requirements.txt file.
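For example (using the repository URL given above; the editable install makes local changes take effect without reinstalling):

git clone https://github.com/atmos-cloud-sim-uj/PySDM.git
cd PySDM
pip install -e .
pip install -r test-time-requirements.txt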

PySDM examples are hosted in a separate repository and constitute the PySDM_examples package. The examples have additional dependencies listed in PySDM_examples package setup.py file. Running the examples requires the PySDM_examples package to be installed. Since the examples package includes Jupyter notebooks (and their execution requires write access), the suggested install and launch steps are:

git clone https://github.com/atmos-cloud-sim-uj/PySDM-examples.git
cd PySDM-examples
pip install -e .
jupyter-notebook

Alternatively, one can also install the examples package from pypi.org by using pip install PySDM-examples.

PySDM examples (Jupyter notebooks reproducing results from literature):

Examples are maintained at the PySDM-examples repository, see PySDM-examples README.md file for details.


Hello-world coalescence example in Python, Julia and Matlab

To depict the PySDM API with a practical example, the following listings provide sample code roughly reproducing Figure 2 from the Shima et al. 2009 paper using PySDM from Python, Julia and Matlab. It is a coalescence-only set-up in which the initial particle size spectrum is exponential and is deterministically sampled so that each super-droplet has equal initial multiplicity:

Julia:
using Pkg
Pkg.add("PyCall")
Pkg.add("Plots")
Pkg.add("PlotlyJS")

using PyCall
si = pyimport("PySDM.physics").si
ConstantMultiplicity = pyimport("PySDM.initialisation.sampling.spectral_sampling").ConstantMultiplicity
Exponential = pyimport("PySDM.initialisation.spectra").Exponential

n_sd = 2^15
initial_spectrum = Exponential(norm_factor=8.39e12, scale=1.19e5 * si.um^3)
attributes = Dict()
attributes["volume"], attributes["n"] = ConstantMultiplicity(spectrum=initial_spectrum).sample(n_sd)
Matlab:
si = py.importlib.import_module('PySDM.physics').si;
ConstantMultiplicity = py.importlib.import_module('PySDM.initialisation.sampling.spectral_sampling').ConstantMultiplicity;
Exponential = py.importlib.import_module('PySDM.initialisation.spectra').Exponential;

n_sd = 2^15;
initial_spectrum = Exponential(pyargs(...
    'norm_factor', 8.39e12, ...
    'scale', 1.19e5 * si.um ^ 3 ...
));
tmp = ConstantMultiplicity(initial_spectrum).sample(int32(n_sd));
attributes = py.dict(pyargs('volume', tmp{1}, 'n', tmp{2}));
Python:
from PySDM.physics import si
from PySDM.initialisation.sampling.spectral_sampling import ConstantMultiplicity
from PySDM.initialisation.spectra.exponential import Exponential

n_sd = 2 ** 15
initial_spectrum = Exponential(norm_factor=8.39e12, scale=1.19e5 * si.um ** 3)
attributes = {}
attributes['volume'], attributes['n'] = ConstantMultiplicity(initial_spectrum).sample(n_sd)

The key element of the PySDM interface is the Particulator class, instances of which are used to manage the system state and control the simulation. Instantiation of the Particulator class is handled by the Builder, as exemplified below:

Julia:
Builder = pyimport("PySDM").Builder
Box = pyimport("PySDM.environments").Box
Coalescence = pyimport("PySDM.dynamics").Coalescence
Golovin = pyimport("PySDM.physics.coalescence_kernels").Golovin
CPU = pyimport("PySDM.backends").CPU
ParticleVolumeVersusRadiusLogarithmSpectrum = pyimport("PySDM.products").ParticleVolumeVersusRadiusLogarithmSpectrum

radius_bins_edges = 10 .^ range(log10(10*si.um), log10(5e3*si.um), length=32) 

builder = Builder(n_sd=n_sd, backend=CPU())
builder.set_environment(Box(dt=1 * si.s, dv=1e6 * si.m^3))
builder.add_dynamic(Coalescence(kernel=Golovin(b=1.5e3 / si.s)))
products = [ParticleVolumeVersusRadiusLogarithmSpectrum(radius_bins_edges=radius_bins_edges, name="dv/dlnr")] 
particulator = builder.build(attributes, products)
Matlab:
Builder = py.importlib.import_module('PySDM').Builder;
Box = py.importlib.import_module('PySDM.environments').Box;
Coalescence = py.importlib.import_module('PySDM.dynamics').Coalescence;
Golovin = py.importlib.import_module('PySDM.physics.coalescence_kernels').Golovin;
CPU = py.importlib.import_module('PySDM.backends').CPU;
ParticleVolumeVersusRadiusLogarithmSpectrum = py.importlib.import_module('PySDM.products').ParticleVolumeVersusRadiusLogarithmSpectrum;

radius_bins_edges = logspace(log10(10 * si.um), log10(5e3 * si.um), 32);

builder = Builder(pyargs('n_sd', int32(n_sd), 'backend', CPU()));
builder.set_environment(Box(pyargs('dt', 1 * si.s, 'dv', 1e6 * si.m ^ 3)));
builder.add_dynamic(Coalescence(pyargs('kernel', Golovin(1.5e3 / si.s))));
products = py.list({ ParticleVolumeVersusRadiusLogarithmSpectrum(pyargs( ...
  'radius_bins_edges', py.numpy.array(radius_bins_edges), ...
  'name', 'dv/dlnr' ...
)) });
particulator = builder.build(attributes, products);
Python:
import numpy as np
from PySDM import Builder
from PySDM.environments import Box
from PySDM.dynamics import Coalescence
from PySDM.physics.coalescence_kernels import Golovin
from PySDM.backends import CPU
from PySDM.products import ParticleVolumeVersusRadiusLogarithmSpectrum

radius_bins_edges = np.logspace(np.log10(10 * si.um), np.log10(5e3 * si.um), num=32)

builder = Builder(n_sd=n_sd, backend=CPU())
builder.set_environment(Box(dt=1 * si.s, dv=1e6 * si.m ** 3))
builder.add_dynamic(Coalescence(kernel=Golovin(b=1.5e3 / si.s)))
products = [ParticleVolumeVersusRadiusLogarithmSpectrum(radius_bins_edges=radius_bins_edges, name='dv/dlnr')]
particulator = builder.build(attributes, products)

The backend argument may be set to CPU or GPU, which translates to choosing the multi-threaded backend or the GPU-resident computation mode, respectively. The employed Box environment corresponds to a zero-dimensional framework (particle positions are not considered). The vectors of particle multiplicities (the n attribute) and particle volumes (the volume attribute) are used to initialise the super-droplets. The Coalescence Monte-Carlo algorithm (Super-Droplet Method) is registered as the only dynamic in the system. Finally, the build() method is used to obtain an instance of Particulator, which can then be used to control time-stepping and access simulation state.
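Continuing the Python listing above, the simulation state can be accessed, e.g., as follows (a minimal sketch assuming to_ndarray() copies attribute data out of the backend storage):

particulator.run(1)  # advance the simulation by one timestep
volume = particulator.attributes['volume'].to_ndarray()  # per-super-droplet volumes
dv_dlnr = particulator.products['dv/dlnr'].get()  # the product registered above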

The run(nt) method advances the simulation by nt timesteps. In the listings below, its usage is interleaved with plotting logic which displays a histogram of the particle mass distribution at selected timesteps:

Julia:
rho_w = pyimport("PySDM.physics.constants_defaults").rho_w
using Plots; plotlyjs()

for step = 0:1200:3600
    particulator.run(step - particulator.n_steps)
    plot!(
        radius_bins_edges[1:end-1] / si.um,
        particulator.products["dv/dlnr"].get()[:] * rho_w / si.g,
        linetype=:steppost,
        xaxis=:log,
        xlabel="particle radius [µm]",
        ylabel="dm/dlnr [g/m^3/(unit dr/r)]",
        label="t = $step s"
    )   
end
savefig("plot.svg")
Matlab:
rho_w = py.importlib.import_module('PySDM.physics.constants_defaults').rho_w;

for step = 0:1200:3600
    particulator.run(int32(step - particulator.n_steps));
    x = radius_bins_edges / si.um;
    y = particulator.products{"dv/dlnr"}.get() * rho_w / si.g;
    stairs(...
        x(1:end-1), ... 
        double(py.array.array('d',py.numpy.nditer(y))), ...
        'DisplayName', sprintf("t = %d s", step) ...
    );
    hold on
end
hold off
set(gca,'XScale','log');
xlabel('particle radius [µm]')
ylabel("dm/dlnr [g/m^3/(unit dr/r)]")
legend()
Python:
from PySDM.physics.constants_defaults import rho_w
from matplotlib import pyplot

for step in [0, 1200, 2400, 3600]:
    particulator.run(step - particulator.n_steps)
    pyplot.step(x=radius_bins_edges[:-1] / si.um,
                y=particulator.products['dv/dlnr'].get()[0] * rho_w / si.g,
                where='post', label=f"t = {step}s")

pyplot.xscale('log')
pyplot.xlabel('particle radius [µm]')
pyplot.ylabel("dm/dlnr [g/m$^3$/(unit dr/r)]")
pyplot.legend()
pyplot.savefig('readme.png')

The resultant plot (generated with the Python code) looks as follows:

[plot: dm/dlnr histograms at t = 0, 1200, 2400 and 3600 s]

Hello-world condensation example in Python, Julia and Matlab

In the following example, a condensation-only set-up is used with the adiabatic Parcel environment. An initial Lognormal spectrum of dry aerosol particles is first initialised to equilibrium wet sizes for the given initial humidity. Subsequent particle growth due to Condensation of water vapour (coupled with the release of latent heat) causes a subset of particles to activate into cloud droplets. Results of the simulation are plotted against the vertical ParcelDisplacement and depict the evolution of PeakSupersaturation, EffectiveRadius, ParticleConcentration and WaterMixingRatio.

Julia:
using PyCall
using Plots; plotlyjs()
si = pyimport("PySDM.physics").si
spectral_sampling = pyimport("PySDM.initialisation.sampling").spectral_sampling
discretise_multiplicities = pyimport("PySDM.initialisation").discretise_multiplicities
Lognormal = pyimport("PySDM.initialisation.spectra").Lognormal
equilibrate_wet_radii = pyimport("PySDM.initialisation").equilibrate_wet_radii
CPU = pyimport("PySDM.backends").CPU
AmbientThermodynamics = pyimport("PySDM.dynamics").AmbientThermodynamics
Condensation = pyimport("PySDM.dynamics").Condensation
Parcel = pyimport("PySDM.environments").Parcel
Builder = pyimport("PySDM").Builder
Formulae = pyimport("PySDM").Formulae
products = pyimport("PySDM.products")

env = Parcel(
    dt=.25 * si.s,
    mass_of_dry_air=1e3 * si.kg,
    p0=1122 * si.hPa,
    q0=20 * si.g / si.kg,
    T0=300 * si.K,
    w= 2.5 * si.m / si.s
)
spectrum = Lognormal(norm_factor=1e4/si.mg, m_mode=50*si.nm, s_geom=1.4)
kappa = .5 * si.dimensionless
cloud_range = (.5 * si.um, 25 * si.um)
output_interval = 4
output_points = 40
n_sd = 256

formulae = Formulae()
builder = Builder(backend=CPU(formulae), n_sd=n_sd)
builder.set_environment(env)
builder.add_dynamic(AmbientThermodynamics())
builder.add_dynamic(Condensation())

r_dry, specific_concentration = spectral_sampling.Logarithmic(spectrum).sample(n_sd)
v_dry = formulae.trivia.volume(radius=r_dry)
r_wet = equilibrate_wet_radii(r_dry, env, kappa * v_dry)

attributes = Dict()
attributes["n"] = discretise_multiplicities(specific_concentration * env.mass_of_dry_air)
attributes["dry volume"] = v_dry
attributes["kappa times dry volume"] = kappa * v_dry
attributes["volume"] = formulae.trivia.volume(radius=r_wet) 

particulator = builder.build(attributes, products=[
    products.PeakSupersaturation(name="S_max", unit="%"),
    products.EffectiveRadius(name="r_eff", unit="um", radius_range=cloud_range),
    products.ParticleConcentration(name="n_c_cm3", unit="cm^-3", radius_range=cloud_range),
    products.WaterMixingRatio(name="ql", unit="g/kg", radius_range=cloud_range),
    products.ParcelDisplacement(name="z")
])
    
cell_id=1
output = Dict()
for (_, product) in particulator.products
    output[product.name] = Array{Float32}(undef, output_points+1)
    output[product.name][1] = product.get()[cell_id]
end 
    
for step = 2:output_points+1
    particulator.run(steps=output_interval)
    for (_, product) in particulator.products
        output[product.name][step] = product.get()[cell_id]
    end 
end 

plots = []
ylbl = particulator.products["z"].unit
for (_, product) in particulator.products
    if product.name != "z"
        append!(plots, [plot(output[product.name], output["z"], ylabel=ylbl, xlabel=product.unit, title=product.name)])
    end
    global ylbl = ""
end
plot(plots..., layout=(1, length(output)-1))
savefig("parcel.svg")
Matlab:
si = py.importlib.import_module('PySDM.physics').si;
spectral_sampling = py.importlib.import_module('PySDM.initialisation.sampling').spectral_sampling;
discretise_multiplicities = py.importlib.import_module('PySDM.initialisation').discretise_multiplicities;
Lognormal = py.importlib.import_module('PySDM.initialisation.spectra').Lognormal;
equilibrate_wet_radii = py.importlib.import_module('PySDM.initialisation').equilibrate_wet_radii;
CPU = py.importlib.import_module('PySDM.backends').CPU;
AmbientThermodynamics = py.importlib.import_module('PySDM.dynamics').AmbientThermodynamics;
Condensation = py.importlib.import_module('PySDM.dynamics').Condensation;
Parcel = py.importlib.import_module('PySDM.environments').Parcel;
Builder = py.importlib.import_module('PySDM').Builder;
Formulae = py.importlib.import_module('PySDM').Formulae;
products = py.importlib.import_module('PySDM.products');

env = Parcel(pyargs( ...
    'dt', .25 * si.s, ...
    'mass_of_dry_air', 1e3 * si.kg, ...
    'p0', 1122 * si.hPa, ...
    'q0', 20 * si.g / si.kg, ...
    'T0', 300 * si.K, ...
    'w', 2.5 * si.m / si.s ...
));
spectrum = Lognormal(pyargs('norm_factor', 1e4/si.mg, 'm_mode', 50 * si.nm, 's_geom', 1.4));
kappa = .5;
cloud_range = py.tuple({.5 * si.um, 25 * si.um});
output_interval = 4;
output_points = 40;
n_sd = 256;

formulae = Formulae();
builder = Builder(pyargs('backend', CPU(formulae), 'n_sd', int32(n_sd)));
builder.set_environment(env);
builder.add_dynamic(AmbientThermodynamics());
builder.add_dynamic(Condensation());

tmp = spectral_sampling.Logarithmic(spectrum).sample(int32(n_sd));
r_dry = tmp{1};
v_dry = formulae.trivia.volume(pyargs('radius', r_dry));
specific_concentration = tmp{2};
r_wet = equilibrate_wet_radii(r_dry, env, kappa * v_dry);

attributes = py.dict(pyargs( ...
    'n', discretise_multiplicities(specific_concentration * env.mass_of_dry_air), ...
    'dry volume', v_dry, ...
    'kappa times dry volume', kappa * v_dry, ... 
    'volume', formulae.trivia.volume(pyargs('radius', r_wet)) ...
));

particulator = builder.build(attributes, py.list({ ...
    products.PeakSupersaturation(pyargs('name', 'S_max', 'unit', '%')), ...
    products.EffectiveRadius(pyargs('name', 'r_eff', 'unit', 'um', 'radius_range', cloud_range)), ...
    products.ParticleConcentration(pyargs('name', 'n_c_cm3', 'unit', 'cm^-3', 'radius_range', cloud_range)), ...
    products.WaterMixingRatio(pyargs('name', 'ql', 'unit', 'g/kg', 'radius_range', cloud_range)) ...
    products.ParcelDisplacement(pyargs('name', 'z')) ...
}));

cell_id = int32(0);
output_size = [output_points+1, length(py.list(particulator.products.keys()))];
output_types = repelem({'double'}, output_size(2));
output_names = [cellfun(@string, cell(py.list(particulator.products.keys())))];
output = table(...
    'Size', output_size, ...
    'VariableTypes', output_types, ...
    'VariableNames', output_names ...
);
for pykey = py.list(keys(particulator.products))
    get = py.getattr(particulator.products{pykey{1}}.get(), '__getitem__');
    key = string(pykey{1});
    output{1, key} = get(cell_id);
end

for i=2:output_points+1
    particulator.run(pyargs('steps', int32(output_interval)));
    for pykey = py.list(keys(particulator.products))
        get = py.getattr(particulator.products{pykey{1}}.get(), '__getitem__');
        key = string(pykey{1});
        output{i, key} = get(cell_id);
    end
end

i=1;
for pykey = py.list(keys(particulator.products))
    product = particulator.products{pykey{1}};
    if string(product.name) ~= "z"
        subplot(1, width(output)-1, i);
        plot(output{:, string(pykey{1})}, output.z, '-o');
        title(string(product.name), 'Interpreter', 'none');
        xlabel(string(product.unit));
    end
    if i == 1
        ylabel(string(particulator.products{"z"}.unit));
    end
    i=i+1;
end
saveas(gcf, "parcel.png");
Python:
from matplotlib import pyplot
from PySDM.physics import si
from PySDM.initialisation import discretise_multiplicities, equilibrate_wet_radii
from PySDM.initialisation.spectra import Lognormal
from PySDM.initialisation.sampling import spectral_sampling
from PySDM.backends import CPU
from PySDM.dynamics import AmbientThermodynamics, Condensation
from PySDM.environments import Parcel
from PySDM import Builder, Formulae, products

env = Parcel(
  dt=.25 * si.s,
  mass_of_dry_air=1e3 * si.kg,
  p0=1122 * si.hPa,
  q0=20 * si.g / si.kg,
  T0=300 * si.K,
  w=2.5 * si.m / si.s
)
spectrum = Lognormal(norm_factor=1e4 / si.mg, m_mode=50 * si.nm, s_geom=1.4)
kappa = .5 * si.dimensionless
cloud_range = (.5 * si.um, 25 * si.um)
output_interval = 4
output_points = 40
n_sd = 256

formulae = Formulae()
builder = Builder(backend=CPU(formulae), n_sd=n_sd)
builder.set_environment(env)
builder.add_dynamic(AmbientThermodynamics())
builder.add_dynamic(Condensation())

r_dry, specific_concentration = spectral_sampling.Logarithmic(spectrum).sample(n_sd)
v_dry = formulae.trivia.volume(radius=r_dry)
r_wet = equilibrate_wet_radii(r_dry, env, kappa * v_dry)

attributes = {
  'n': discretise_multiplicities(specific_concentration * env.mass_of_dry_air),
  'dry volume': v_dry,
  'kappa times dry volume': kappa * v_dry,
  'volume': formulae.trivia.volume(radius=r_wet)
}

particulator = builder.build(attributes, products=[
  products.PeakSupersaturation(name='S_max', unit='%'),
  products.EffectiveRadius(name='r_eff', unit='um', radius_range=cloud_range),
  products.ParticleConcentration(name='n_c_cm3', unit='cm^-3', radius_range=cloud_range),
  products.WaterMixingRatio(name='ql', unit='g/kg', radius_range=cloud_range),
  products.ParcelDisplacement(name='z')
])

cell_id = 0
output = {product.name: [product.get()[cell_id]] for product in particulator.products.values()}

for step in range(output_points):
  particulator.run(steps=output_interval)
  for product in particulator.products.values():
    output[product.name].append(product.get()[cell_id])

fig, axs = pyplot.subplots(1, len(particulator.products) - 1, sharey="all")
for i, (key, product) in enumerate(particulator.products.items()):
  if key != 'z':
    axs[i].plot(output[key], output['z'], marker='.')
    axs[i].set_title(product.name)
    axs[i].set_xlabel(product.unit)
    axs[i].grid()
axs[0].set_ylabel(particulator.products['z'].unit)
pyplot.savefig('parcel.svg')

The resultant plot (generated with the Matlab code) looks as follows:

[plot: S_max, r_eff, n_c_cm3 and ql profiles against parcel displacement z]

Contributing, reporting issues, seeking support

Submitting new code to the project, please preferably use GitHub pull requests (or the PySDM-examples PR site if working on examples) - it helps to keep a record of code authorship, to track and archive the code-review workflow, and to benefit from the continuous-integration setup which automates execution of tests on newly added code.

As of now, the copyright to the entire PySDM codebase is with the Jagiellonian University, and code contributions are assumed to imply transfer of copyright. Should there be a need to make an exception, please indicate it when creating a pull request or contributing code in any other way. In any case, the license of the contributed code must be compatible with GPL v3.

Developing the code, we follow the Zen of Python and the KISS principle. The codebase has greatly benefited from PyCharm code inspections and Pylint code analysis (which constitutes one of the CI workflows).

Issues regarding any incorrect, unintuitive or undocumented behaviour of PySDM are best reported on the GitHub issue tracker. Feature requests are recorded in the "Ideas..." PySDM wiki page.

We encourage using the GitHub Discussions feature (rather than the issue tracker) for seeking support in understanding, using and extending PySDM code.

Please use the PySDM issue-tracking and discussion infrastructure for PySDM-examples as well. We look forward to your contributions and feedback.

Credits:

The development and maintenance of PySDM is led by Sylwester Arabas. Piotr Bartman has been the architect and main developer of the technological solutions in PySDM. The suite of examples shipped with PySDM includes contributions from researchers from Jagiellonian University's departments of computer science, physics and chemistry, and from Caltech's Climate Modelling Alliance.

Development of PySDM had been initially supported by the EU through a grant of the Foundation for Polish Science (POIR.04.04.00-00-5E1C/18) realised at the Jagiellonian University. The immersion freezing support in PySDM is developed with support from the US Department of Energy Atmospheric System Research programme through a grant realised at the University of Illinois at Urbana-Champaign.

copyright: Jagiellonian University
licence: GPL v3

Related resources and open-source projects

SDM patents (some expired, some withdrawn):

Other SDM implementations:

non-SDM probabilistic particle-based coagulation solvers

Python models with discrete-particle (moving-sectional) representation of particle size spectrum

Comments
  • Add non-constant surface tension

    Add non-constant surface tension

    I'm interested in modifying this line with the calculation of the Kelvin term to allow for variable surface tension (const.sgm no longer constant): https://github.com/atmos-cloud-sim-uj/PySDM/blob/81243955ae257038c3a427d2618d32972fd1de02/PySDM/backends/numba/numba_helpers.py#L101

    I want instead to replace it with an expression for the surface tension that allows for bulk-surface partitioning of surface-active organic species. Something along the lines of this compressed film model (https://doi.org/10.1126/science.aad4889) where the surface tension is a function of the wet radius, dry radius, organic fraction, and temperature. It seems like rw, rd, and T are available in this scope already (need to just pass r and rd to A(T)). I would then just need to add another attribute for f_org to describe the fraction of the aerosol particle that is organic. Do you foresee any issues doing this?
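    A hypothetical sketch of such a hook (names and mixing rule are illustrative placeholders only, not the compressed-film model from the cited paper):

    def sigma(T, r_wet, r_dry, f_org, sgm_water=0.072, sgm_org=0.040):  # [N/m]
        # placeholder: naive volume-fraction mixing of organic and water values;
        # a compressed-film model would instead partition organics between bulk
        # and surface as a function of r_wet, r_dry, f_org and T
        f_surf = f_org * (r_dry / r_wet) ** 3
        return f_surf * sgm_org + (1 - f_surf) * sgm_water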

    in-progress 
    opened by claresinger 13
  • ThrustRTC internal error in new algorithmic_method kernel

    ThrustRTC internal error in new algorithmic_method kernel

    @slayoo @trontrytel In my quest to implement a breakup-like process in PySDM, I am encountering an error for the kernel launch for my new random fragmentation backend algorithmic method. Please see the branch at https://github.com/edejong-caltech/PySDM/tree/SLAMS-fragmentation with a minimum not-working-example in https://github.com/edejong-caltech/PySDM/blob/SLAMS-fragmentation/PySDM_tests/breakup_tests/gpu_issue.ipynb.

    In this implementation, breakup proceeds similarly to coalescence but takes an additional argument, n_fragments, to scale the multiplicities and attributes. The breakup process returns expected output when n_fragments is returned deterministically, but I have added an additional method to _algorithmic_methods.py and an additional random-generator case that returns only a vector, to allow for stochastic fragmentation.

    The Numba and FakeThrust backends for the new method SLAMS_fragmentation execute and produce expected output, but the kernel launch for the ThrustRTC backend produces an error: "an internal error happend" (screenshot included). I have not been able to trace the source of the argument error to launch_n that leads to this internal error, as the call n_for_launch_n does not exist in a readable form within the ThrustRTC library.


    Here are the current package versions loaded in my environment:

    _libgcc_mutex 0.1 main
    argon2-cffi 20.1.0 py38h27cfd23_1
    async_generator 1.10 pyhd3eb1b0_0
    attrs 20.3.0 pyhd3eb1b0_0
    backcall 0.2.0 pyhd3eb1b0_0
    blas 1.0 mkl
    bleach 3.3.0 pyhd3eb1b0_0
    ca-certificates 2021.4.13 h06a4308_1
    certifi 2020.12.5 py38h06a4308_0
    cffi 1.14.5 py38h261ae71_0
    cycler 0.10.0 py38_0
    dbus 1.13.18 hb2f20db_0
    decorator 5.0.6 pyhd3eb1b0_0
    defusedxml 0.7.1 pyhd3eb1b0_0
    entrypoints 0.3 py38_0
    expat 2.3.0 h2531618_2
    fontconfig 2.13.1 h6c09931_0
    freetype 2.10.4 h5ab3b9f_0
    glib 2.68.1 h36276a3_0
    gst-plugins-base 1.14.0 h8213a91_2
    gstreamer 1.14.0 h28cd5cc_2
    icu 58.2 he6710b0_3
    importlib-metadata 3.10.0 py38h06a4308_0
    importlib_metadata 3.10.0 hd3eb1b0_0
    intel-openmp 2021.2.0 h06a4308_610
    ipykernel 5.3.4 py38h5ca1d4c_0
    ipython 7.22.0 py38hb070fc8_0
    ipython_genutils 0.2.0 pyhd3eb1b0_1
    jedi 0.17.0 py38_0
    jinja2 2.11.3 pyhd3eb1b0_0
    jpeg 9b h024ee3a_2
    jsonschema 3.2.0 py_2
    jupyter_client 6.1.12 pyhd3eb1b0_0
    jupyter_core 4.7.1 py38h06a4308_0
    jupyterlab_pygments 0.1.2 py_0
    kiwisolver 1.3.1 py38h2531618_0
    lcms2 2.12 h3be6417_0
    ld_impl_linux-64 2.33.1 h53a641e_7
    libffi 3.3 he6710b0_2
    libgcc-ng 9.1.0 hdf63c60_0
    libgfortran-ng 7.3.0 hdf63c60_0
    libpng 1.6.37 hbc83047_0
    libsodium 1.0.18 h7b6447c_0
    libstdcxx-ng 9.1.0 hdf63c60_0
    libtiff 4.1.0 h2733197_1
    libuuid 1.0.3 h1bed415_2
    libxcb 1.14 h7b6447c_0
    libxml2 2.9.10 hb55368b_3
    lz4-c 1.9.3 h2531618_0
    markupsafe 1.1.1 py38h7b6447c_0
    matplotlib 3.3.4 py38h06a4308_0
    matplotlib-base 3.3.4 py38h62a2d02_0
    mistune 0.8.4 py38h7b6447c_1000
    mkl 2021.2.0 h06a4308_296
    mkl-service 2.3.0 py38h27cfd23_1
    mkl_fft 1.3.0 py38h42c9631_2
    mkl_random 1.2.1 py38ha9443f7_2
    nb_conda_kernels 2.3.1 py38h06a4308_0
    nbclient 0.5.3 pyhd3eb1b0_0
    nbconvert 6.0.7 py38_0
    nbformat 5.1.3 pyhd3eb1b0_0
    ncurses 6.2 he6710b0_1
    nest-asyncio 1.5.1 pyhd3eb1b0_0
    notebook 6.3.0 py38h06a4308_0
    numpy 1.20.1 py38h93e21f0_0
    numpy-base 1.20.1 py38h7d8b39e_0
    olefile 0.46 py_0
    openssl 1.1.1k h27cfd23_0
    packaging 20.9 pyhd3eb1b0_0
    pandoc 2.12 h06a4308_0
    pandocfilters 1.4.3 py38h06a4308_1
    parso 0.8.2 pyhd3eb1b0_0
    pcre 8.44 he6710b0_0
    pexpect 4.8.0 pyhd3eb1b0_3
    pickleshare 0.7.5 pyhd3eb1b0_1003
    pillow 8.2.0 py38he98fc37_0
    pip 21.0.1 py38h06a4308_0
    prometheus_client 0.10.1 pyhd3eb1b0_0
    prompt-toolkit 3.0.17 pyh06a4308_0
    ptyprocess 0.7.0 pyhd3eb1b0_2
    pycparser 2.20 py_2
    pygments 2.8.1 pyhd3eb1b0_0
    pyparsing 2.4.7 pyhd3eb1b0_0
    pyqt 5.9.2 py38h05f1152_4
    pyrsistent 0.17.3 py38h7b6447c_0
    pysdm 1.4.dev194+gb653e8e.d20210517 dev_0
    python 3.8.8 hdb3f193_5
    python-dateutil 2.8.1 pyhd3eb1b0_0
    pyzmq 20.0.0 py38h2531618_1
    qt 5.9.7 h5867ecd_1
    readline 8.1 h27cfd23_0
    scipy 1.6.2 py38had2a1c9_1
    send2trash 1.5.0 pyhd3eb1b0_1
    setuptools 52.0.0 py38h06a4308_0
    sip 4.19.13 py38he6710b0_0
    six 1.15.0 py38h06a4308_0
    sqlite 3.35.4 hdfb4753_0
    tbb 2021.2.0 pypi_0 pypi
    terminado 0.9.4 py38h06a4308_0
    testpath 0.4.4 pyhd3eb1b0_0
    tk 8.6.10 hbc83047_0
    tornado 6.1 py38h27cfd23_0
    traitlets 5.0.5 pyhd3eb1b0_0
    wcwidth 0.2.5 py_0
    webencodings 0.5.1 py38_1
    wheel 0.36.2 pyhd3eb1b0_0
    xz 5.2.5 h7b6447c_0
    zeromq 4.3.4 h2531618_0
    zipp 3.4.1 pyhd3eb1b0_0
    zlib 1.2.11 h7b6447c_3
    zstd 1.4.9 haebb681_0

    opened by edejong-caltech 12
  • Remove collection efficiency dependency on mesh size for 0D box setup

    Remove collection efficiency dependency on mesh size for 0D box setup

    My feeling is that a 0D/box setup should not require specification of any physical dimension or mesh size. However, in attempting to run such a scenario, an error is thrown in:

    PySDM/core.py in normalize(self, prob, norm_factor, subs)

    def normalize(self, prob, norm_factor, subs):
    ---> 63     factor = self.dt/subs/self.mesh.dv
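    As the README listings above illustrate, passing an explicit dv when constructing the Box environment works around this division, e.g. (a sketch, not a fix for the underlying issue):

    from PySDM.environments import Box
    from PySDM.physics import si

    env = Box(dt=1 * si.s, dv=1e6 * si.m ** 3)  # explicit dv feeds self.mesh.dv above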

    opened by edejong-caltech 10
  • cancel GitHub actions

    cancel GitHub actions

    Maybe we should add this to the workflows to cancel the actions on all but the latest push to a given branch? https://github.com/marketplace/actions/cancel-workflow-action

    opened by claresinger 8
  • checking constants

    checking constants

    https://github.com/atmos-cloud-sim-uj/PySDM/blob/fb619fa842ea419f6f038c532f07432f310f7e7a/PySDM/physics/aqueous_chemistry/support.py#L109

    I'm not sure because of the comment left above this line, but shouldn't the value be 7.5 * 1e7?

    opened by trontrytel 8
  • RuntimeWarning: invalid value encountered in subtract from scipy in BDF solver

    RuntimeWarning: invalid value encountered in subtract from scipy in BDF solver

    As captured in this build: https://ci.appveyor.com/project/slayoo/pysdm/builds/36311450/job/e8iy6hiy5or1csxh

    PySDM_tests\smoke_tests\Arabas_and_Shima_2017_Fig_5\test_conservation.py:44: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    PySDM_examples\Arabas_and_Shima_2017_Fig_5\simulation.py:70: in run
        self.core.run(self.n_substeps)
    PySDM\core.py:101: in run
        dynamic()
    PySDM\dynamics\condensation.py:44: in __call__
        self.core.condensation(
    PySDM_tests\smoke_tests\utils\bdf.py:32: in bdf_condensation
        Numba._condensation.py_func(
    PySDM\backends\numba\impl\_algorithmic_methods.py:259: in _condensation
        qv_new, thd_new, substeps_hint, ripening_flag = solver(
    PySDM_tests\smoke_tests\utils\bdf.py:84: in solve
        integ = scipy.integrate.solve_ivp(
    C:\Python38\lib\site-packages\scipy\integrate\_ivp\ivp.py:576: in solve_ivp
        message = solver.step()
    C:\Python38\lib\site-packages\scipy\integrate\_ivp\base.py:181: in step
        success, message = self._step_impl()
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    self = <scipy.integrate._ivp.bdf.BDF object at 0x0FFF93E8>
        def _step_impl(self):
            t = self.t
            D = self.D
        
            max_step = self.max_step
            min_step = 10 * np.abs(np.nextafter(t, self.direction * np.inf) - t)
            if self.h_abs > max_step:
                h_abs = max_step
                change_D(D, self.order, max_step / self.h_abs)
                self.n_equal_steps = 0
            elif self.h_abs < min_step:
                h_abs = min_step
                change_D(D, self.order, min_step / self.h_abs)
                self.n_equal_steps = 0
            else:
                h_abs = self.h_abs
        
            atol = self.atol
            rtol = self.rtol
            order = self.order
        
            alpha = self.alpha
            gamma = self.gamma
            error_const = self.error_const
        
            J = self.J
            LU = self.LU
            current_jac = self.jac is None
        
            step_accepted = False
            while not step_accepted:
                if h_abs < min_step:
                    return False, self.TOO_SMALL_STEP
        
                h = h_abs * self.direction
                t_new = t + h
        
                if self.direction * (t_new - self.t_bound) > 0:
                    t_new = self.t_bound
                    change_D(D, order, np.abs(t_new - t) / h_abs)
                    self.n_equal_steps = 0
                    LU = None
        
                h = t_new - t
                h_abs = np.abs(h)
        
                y_predict = np.sum(D[:order + 1], axis=0)
        
                scale = atol + rtol * np.abs(y_predict)
                psi = np.dot(D[1: order + 1].T, gamma[1: order + 1]) / alpha[order]
        
                converged = False
                c = h / alpha[order]
                while not converged:
                    if LU is None:
                        LU = self.lu(self.I - c * J)
        
                    converged, n_iter, y_new, d = solve_bdf_system(
                        self.fun, t_new, y_predict, c, psi, LU, self.solve_lu,
                        scale, self.newton_tol)
        
                    if not converged:
                        if current_jac:
                            break
                        J = self.jac(t_new, y_predict)
                        LU = None
                        current_jac = True
        
                if not converged:
                    factor = 0.5
                    h_abs *= factor
                    change_D(D, order, factor)
                    self.n_equal_steps = 0
                    LU = None
                    continue
        
                safety = 0.9 * (2 * NEWTON_MAXITER + 1) / (2 * NEWTON_MAXITER
                                                           + n_iter)
        
                scale = atol + rtol * np.abs(y_new)
                error = error_const[order] * d
                error_norm = norm(error / scale)
        
                if error_norm > 1:
                    factor = max(MIN_FACTOR,
                                 safety * error_norm ** (-1 / (order + 1)))
                    h_abs *= factor
                    change_D(D, order, factor)
                    self.n_equal_steps = 0
                    # As we didn't have problems with convergence, we don't
                    # reset LU here.
                else:
                    step_accepted = True
        
            self.n_equal_steps += 1
        
            self.t = t_new
            self.y = y_new
        
            self.h_abs = h_abs
            self.J = J
            self.LU = LU
        
            # Update differences. The principal relation here is
            # D^{j + 1} y_n = D^{j} y_n - D^{j} y_{n - 1}. Keep in mind that D
            # contained difference for previous interpolating polynomial and
            # d = D^{k + 1} y_n. Thus this elegant code follows.
    >       D[order + 2] = d - D[order + 1]
    E       RuntimeWarning: invalid value encountered in subtract
    C:\Python38\lib\site-packages\scipy\integrate\_ivp\bdf.py:403: RuntimeWarning
    
    CI 
    opened by slayoo 8
  • Features/chemical reaction

    Features/chemical reaction

    The aim here was to implement a chemical oxidation scheme for use in PySDM, based on the works of Dr. Anna Jaruga.

    The code successfully implements a "dynamic" that manages the chemical reactions. The code is designed to be as modular as possible, with the intention of introducing other, different reaction types.

    The main problem that the current implementation is facing (in the attached test) is the slow growth of the droplets, which causes them to be too concentrated for too long. This is a problem not only for numerical reasons; it also runs into the upper limit on how concentrated the droplets can be for the reaction to occur. Switching chemistry on and off can be seen on the graph as inflections in all curves.

    The problem may be in the test setup. In particular, in the original work, Dr. Jaruga uses 1 kg of dry air. PySDM does not seem to deal with such a mass; even with chemistry turned off, strange bends occur in the LWC curve and, by extension, pH. What is more, with such a mass, the droplets are never dilute enough to allow the chemistry to "start up". As such, the test currently uses 100 kg of dry air.

    At 100 kg, reasonable results can be obtained. However, they are still incompatible with what is presented in the original work - the concentration of hydrogen ions coming from the starting compound is too high. This results in a very low pH throughout the simulation. Whereas in the work of Dr. Jaruga the pH begins to increase strongly after about 200 m (400 s) and tends towards a value of about 5, in my simulations it is closer to 4, which means a 10-fold difference in concentration. Naturally, these results are not directly comparable due to the above difference in air mass. Ultimately, this suggests either a poor aerosol concentration, an incorrectly implemented pressure change (or lack thereof), or a problem with other microphysical parameters.

    Another (possibly related) issue is the extremely fast consumption of all available sulfur dioxide, effectively preventing the chemical processes of interest from taking place.

    This code was created as part of a project for the course "Modelling of Atmospheric Clouds" at the Institute of Mathematics and Computer Science, Jagiellonian University.

    opened by Golui 8
  • Update requirements.txt to include missing packages

    Update requirements.txt to include missing packages

    When installing via the instructions pip install git+https://github.com/atmos-cloud-sim-uj/PySDM.git in the README from a fresh conda environment w/ Python=3.8, pystrict was not installed so neither the tests nor the demo code could run.

    opened by darothen 7
  • Update readme example and draft of constant kernel

    Update readme example and draft of constant kernel

    One small fix to the README example so that it runs properly, plus an attempt to implement a constant collision kernel. This first attempt uses the sum_pairs function as a workaround to size the output array properly; ideally, the constant kernel would be more efficient and would avoid this workaround. Tutorials on specifying one's own kernel would be helpful: for example, how to create a polynomial kernel of the form f(r, r') rather than f(r+r').

    opened by edejong-caltech 7
  • PyPI distribution

    PyPI distribution

    • [x] Add setup.py https://packaging.python.org/tutorials/packaging-projects/#creating-setup-py https://packaging.python.org/guides/distributing-packages-using-setuptools/
    • [ ] Generating distribution archives: https://packaging.python.org/tutorials/packaging-projects/#generating-distribution-archives
    • [x] Register at PyPI: https://pypi.org
    • [ ] Upload distribution: https://packaging.python.org/tutorials/packaging-projects/#uploading-the-distribution-archives
    • [ ] Install pysdm package and test it
    opened by piotrbartman 7
  • Singularity testing

    Singularity testing

    For some reason my pip3 cannot find the versions of numpy and scipy specified in the requirements. I tried upgrading pip3 but it did not solve the problem.

    Would it be possible to downgrade those requirements?

    opened by trontrytel 6
  • Straub fragmentation function optimization: avoid using Storage `__getitem__`

    Straub fragmentation function optimization: avoid using Storage `__getitem__`

    There seems to be a significant overhead in using constructs like https://github.com/atmos-cloud-sim-uj/PySDM/blob/ac01b4ea5e91f3045d9ba3acbe61c828ed42d589/PySDM/dynamics/collisions/breakup_fragmentations/straub2010.py#L50 which result in calling Storage __getitem__ millions of times in simulations like the 1D rainshaft one.

    A possible workaround would be to implement a method in Storage (or PairwiseStorage in this case) which would do the copying using backend code (njitted in the case of the Numba CPU backend).
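    A generic sketch of such an njitted bulk copy (illustrative only, not actual PySDM backend code):

    import numba

    @numba.njit
    def bulk_copy(dst, src, idx):
        # a single compiled loop instead of millions of Python-level __getitem__ calls
        for i in range(len(idx)):
            dst[i] = src[idx[i]]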

    opened by slayoo 0
  • optimize `cell_start` iteration in `SuperDropletCountPerGridbox` product

    optimize `cell_start` iteration in `SuperDropletCountPerGridbox` product

    it uses Storage __getitem__ (a lot!), which is meant just for debugging; there is also a loop that likely should be @njitted

    Thanks @mstach60161 for profiling and noticing it!

    opened by slayoo 0
  • factor out Storage-related logic into a separate package

    factor out Storage-related logic into a separate package

    Storage class implementations are not really within the scope of PySDM. If we make them available as a separate package, there is potential for reuse (and improved maintenance).

    opened by slayoo 0
Releases (v2.15)
  • v2.15(Dec 30, 2022)

    • major updates in collision methods (mostly GPU support for breakup, but also CPU refactors and cleanups & improved test coverage) - kudos @abulenok!
    • numerous updates to FakeThrustRTC to support the above - kudos @abulenok
    • cleanups and docstrings in Formulae-related code - kudos @abulenok
    • clarification (significant) in displacement methods arg names (omega -> position_in_cell) - kudos @piotrbartman
    • fragmentation functions moved (partially) into physics submodule
    • introducing ConcentrationProduct base class supporting standard-temperature-and-pressure (STP) normalisation
    • Multiplicities::MAX_VALUE and unit test (+usage in breakup dynamics)
    • .zenodo.json file added to streamline Zenodo metadata provision
    Source code(tar.gz)
    Source code(zip)
  • v2.14(Nov 8, 2022)

    • fragmentation functions for GPU backend (@abulenok)
    • implement flag_zero_multiplicity on GPU backend within a Commons struct (@abulenok)
    • new pair and storage methods for GPU: min_pair, multiply_pair and divide_if_not_zero (@abulenok)
    • made ABIFM immersion freezing logic employ supersaturation constraint (to be consistent with analogous condition in INAS logic)
    • breakup algorithm: fix an issue with zero multiplicities (introduced max(round(nj), 1)) (@edejong-caltech)
    • shift from per-gridbox to per-kg units in rate product (@edejong-caltech)
    • smoke test for Bieli et al. example (@edejong-caltech)
    • make CPU find_pairs correctly handle the length argument (@abulenok)
    • added Python 3.10 to CI runs
    • added smoke tests with 0D simulations covering breakup (upcoming deJong et al. paper)
    Source code(tar.gz)
    Source code(zip)
  • v2.13(Oct 23, 2022)

    • smoke test comparing dry/wet equilibrium calculation against PyPartMC (thanks @zdaq12)
    • avoiding divide-by-zero warnings in EffectiveRadius product
    • fix physical unit in size-spectrum products (thanks @sajjadazimi)
    • better array-valued argument handling in Formulae methods using numba.vectorize (thanks @claresinger)
    • new method: Builder::replace_dynamic() (@edejong-caltech)
    • handling NVRTC_PATH env var to point ThrustRTC to non-standard location of nVidia libs (@abulenok)
    • new backend methods: min_pair, divide_if_not_zero (@edejong-caltech)
    • GPU support for freezing
    • 3D displacement incl. GPU support (@abulenok)
    • Straub fragmentation function (@edejong-caltech)
    • breakup algorithm improvements incl. reworked limiter logic, fragment_size instead of min_volume, fragmentation function updates (@edejong-caltech)
    • backends: fixed __init__ calls in multiple-inheritance contexts
    • FakeThrust fixes to better match ThrustRTC API (@abulenok)
    • make formulae available at attribute mapper scope so request_attribute can be called without constraints
    • storage, attribute, mesh common code: improved test coverage, cleanups, docstrings (@abulenok!)
    • multi-stage GitHub Actions workflow (pylint, no-numba unit tests, etc. first, only then all the tests)
    • updates to make the code clean with newer versions of pylint
    • new tests for displacement, freezing, breakup, mesh, builder, formulae and storage logic
    Source code(tar.gz)
    Source code(zip)
  • v2.12(Aug 31, 2022)

    • major updates in breakup algorithmics (no more while loop, fixes) and test coverage - thanks @edejong-caltech!
    • new surface-tension model tests + code fixes and cleanups - thanks @claresinger
    • new product: averaged terminal velocity - thanks @sajjadazimi
    • new freezing-related products: IceNucleiConcentration, FrozenParticleConcentration
    • new attribute: WetToCriticalVolumeRatio
    • added Fierce diagrams as a test for differences between full and linearised kappa-Koehler formulae - thanks @nriemer for hint!
    Source code(tar.gz)
    Source code(zip)
  • v2.11(Aug 16, 2022)

    • fixing version indicators for dependencies in pypi.org-published files (regression introduced when automating package uploads)
    • option to toggle overflow warning in the breakup dynamic (thanks @edejong-caltech)
    • replacing r_crit<r_dry errors in wet-size equilibrium calculations with r_wet=r_dry setting (workaround for big-f_org/small-sized aerosols, thanks @claresinger)
    • immersion freezing cleanups
    • cleaning up imports from deprecated packages in SciPy (just subpackage naming changes)
    Source code(tar.gz)
    Source code(zip)
  • v2.10(Jun 19, 2022)

    • fragmentation limiters (by @edejong-caltech)
    • 1D VTK and netCDF exporters (by @sajjadazimi)
    • PyPI release automation through GitHub Actions
    • introducing test-time-dependency on PyPartMC
    Source code(tar.gz)
    Source code(zip)
  • v2.9(Jun 1, 2022)

    • option to skip thd update in condensation dynamic added (for KiD example, kudos @sajjadazimi!)
    • JOSS PySDM v2 paper updates (kudos @edejong-caltech & @claresinger)
    Source code(tar.gz)
    Source code(zip)
  • v2.8(May 18, 2022)

    • single-column environment and examples beefed up (kudos @sajjadazimi)
    • improved aerosol initialisation test coverage (kudos @claresinger)
    • API change in aerosol initialisation (aerosol.aerosol_modes -> aerosol.modes)
    • JOSS v2 paper progress
    Source code(tar.gz)
    Source code(zip)
  • v2.7(May 3, 2022)

    • handling of domain-leaving particles in displacement logic and 1D kinematic smoke tests updates (thanks to @sajjadazimi)
    • more tests for CCN activation (thanks to @claresinger)
    • code cleanups (including enforcing keyword parameters for functions with many args)
    Source code(tar.gz)
    Source code(zip)
  • v2.6(Apr 24, 2022)

    • common aerosol composition code in PySDM.initialisation (@claresinger)
    • breakup: counting breakup deficit instead of reporting error, vmin and nfmax thresholds (@edejong-caltech)
    • Area attribute and SimpleGeometric collision kernel (@edejong-caltech)
    • NumberSizeSpectrum and BreakupRateDeficitPerGridbox products (@edejong-caltech)
    • adaptive time-stepping in Displacement dynamic (criterion suggested by @mwest1066)
    • Feingold1988Frag fragmentation function (@edejong-caltech)
    • renaming default branch from master to main
    Source code(tar.gz)
    Source code(zip)
  • v2.5(Mar 9, 2022)

  • v2.4(Mar 8, 2022)

  • v2.3(Mar 3, 2022)

    • fix in collision dynamics ctors solving problem with undefined random seed on the GPU backend (thanks @s-shima for reporting it)
    • cleanups and new smoke tests for CCN activation representation (thanks @claresinger)
    • CI: add job cancellation workflow for GitHub Actions (thanks @claresinger)
    Source code(tar.gz)
    Source code(zip)
  • v2.2(Feb 24, 2022)

    • updates in Lowe et al. 2019 example (thanks @claresinger)
    • ambient relative humidity wrt ice (as option to the existing AmbientRelativeHumidity product)
    Source code(tar.gz)
    Source code(zip)
  • v2.1(Feb 23, 2022)

    • new example: parcel simulation based on a setup from Pyrcel documentation (kudos @claresinger)
    • adding dry option to ParticleVolumeVersusRadiusLogarithmSpectrum product
    • arbitrary-moment product factory
    • nbviewer badges in README.md
    • cleanups
    Source code(tar.gz)
    Source code(zip)
  • v2.0(Feb 17, 2022)

    🎉:

    • Monte-Carlo super-particle-number-conserving collisional breakup representation (original algorithm and implementation by @edejong-caltech and @jb-mackay)

    misc:

    • Lowe et al. 2019 (Pruppacher & Klett) diffusion kinetics/thermics & latent heat formula (thanks @claresinger)
    • Lowe 1977 saturation vapour pressure formulae (thanks @claresinger)
    • Murphy and Koop 2005 saturation vapour pressure formulae (thanks @isilber)
    • new product: FlowVelocityComponent
    • new spectra: Gamma & Gaussian (thanks @edejong-caltech)
    • fixing race condition in coalescence counter increments (thanks @jb-mackay)
    • fixing non-rectangular domain handling in VTK exporter
    • switch from SciPy to PySDM backend root-solver in CompressedFilmRuehl surface tension (thanks @claresinger)
    • switching to a single buffer for all products (less memory allocated)
    • better unit-test coverage for physics formulae incl. units (thanks @claresinger)
    Source code(tar.gz)
    Source code(zip)
  • v1.27(Mar 1, 2022)

    • moving terminal velocity and coalescence kernels out of "physics" (re-release to trigger DOI generation after enabling integration with Zenodo)
    Source code(tar.gz)
    Source code(zip)
  • v1.26(Jan 14, 2022)

    • new Szyszkowski-Langmuir surface tension model (and updates in Ruehl model) - kudos @claresinger!
    • JOSS paper branch merged into main one, added CI workflow to check the paper code
    • new cooling rate attribute and product
    • new max Courant number product
    • VTK exporter fixes
    • mass and heat accommodation coefficients alterable from within constants
    • cleanups
    Source code(tar.gz)
    Source code(zip)
  • v1.25(Jan 3, 2022)

    • major refactor around physical constants handling (Formulae ctor now accepts a dictionary of constant values to use instead of the defaults)
    • handling exdown -> pytest-codeblocks package name change in GA workflow files
    • first smoke test for immersion freezing using 2d kinematic setup (both singular and time-dependent)
    • binned terminal velocity product and a corresponding 2d kinematic GUI panel
    • mixed-phase support at Moist environment base class level
    • handling of non-spatial dimensions (e.g. histogram bins) in netCDF exporter
    • cleanups
    Source code(tar.gz)
    Source code(zip)
  • v1.24(Dec 14, 2021)

    • module docstring coverage reached 100% (checked with pylint in CI)
    • using python -We -m pdoc instead of pdoc to catch broken code links within docstrings (and other issues)
    • catching OSError when importing ThrustRTC and issuing a warning (pdoc parsing works then even on machines without CUDA)
    • some minor code cleanups/refactors
    Source code(tar.gz)
    Source code(zip)
  • v1.23(Dec 12, 2021)

    • fixes and refactors around unit handling in the common code of the products subsystem (incl. new RateProduct base class)
    • numerous GPU code fixes (kudos to @Delcior for reporting them)
    • FakeThrust API updates to match ThrustRTC 0.3.17
    Source code(tar.gz)
    Source code(zip)
  • v1.22(Nov 25, 2021)

  • v1.21(Nov 22, 2021)

    • product subsystem refactor (incl. enforced SI units as defaults, pint handling of user-supplied unit conversion, shorter code, clearer directory structure, more common code, improved test coverage)
    • cleanups
    Source code(tar.gz)
    Source code(zip)
  • v1.20(Nov 11, 2021)

    • code cleanups & refactors
    • making pylint warnings fail GA workflow
    • phasing out PrecisionResolver - precision is now an init parameter of the GPU backend class
    Source code(tar.gz)
    Source code(zip)
  • v1.19(Oct 22, 2021)

    • VTK product export (kudos @abulenok)
    • new surface tension model draft added, relabelling existing models (kudos @claresinger)
    • cleanups (incl. graphics files linked from README - now showing files generated through GitHub Actions on the latest merge)
    Source code(tar.gz)
    Source code(zip)
  • v1.18(Oct 20, 2021)

    • more options around freezing spectrum (incl. Bigg 1953 formulation)
    • cleanups, better error messages in initialisation
    • smarter setitem support for Box environment
    Source code(tar.gz)
    Source code(zip)
  • v1.17(Oct 18, 2021)

  • v1.16(Oct 1, 2021)

  • v1.15(Sep 30, 2021)

    • moving backend instantiation from within Builder up to user scope
    • default random seed is now shuffled at PySDM import (but kept constant for CI runs)
    • cleanups
    Source code(tar.gz)
    Source code(zip)