Powerful, efficient particle trajectory analysis in scientific Python.

freud

Overview

The freud Python library provides a simple, flexible, powerful set of tools for analyzing trajectories obtained from molecular dynamics or Monte Carlo simulations. High-performance, parallelized C++ is used to compute standard analyses such as radial distribution functions, correlation functions, order parameters, and clusters, as well as original analysis methods including potentials of mean force and torque (PMFTs) and local environment matching. The freud library supports many input formats and outputs NumPy arrays, enabling integration with the scientific Python ecosystem for many typical materials science workflows.

Citation

When using freud to process data for publication, please use the citation described in the freud documentation.

Installation

The easiest ways to install freud are using pip:

pip install freud-analysis

or conda:

conda install -c conda-forge freud

freud is also available via containers for Docker or Singularity. If you need more detailed information or wish to install freud from source, please refer to the Installation Guide.

Examples

The freud library is called using Python scripts. Many core features are demonstrated in the freud documentation. The examples come in the form of Jupyter notebooks, which can also be downloaded from the freud examples repository or launched interactively on Binder. Below is a sample script that computes the radial distribution function for a simulation run with HOOMD-blue and saved into a GSD file.

import freud
import gsd.hoomd

# Create a freud compute object (RDF is the canonical example)
rdf = freud.density.RDF(bins=50, r_max=5)

# Load a GSD trajectory (see docs for other formats)
traj = gsd.hoomd.open('trajectory.gsd', 'rb')
for frame in traj:
    rdf.compute(system=frame, reset=False)

# Get bin centers, RDF data from attributes
r = rdf.bin_centers
y = rdf.rdf
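
To visualize the result, these arrays can be passed straight to matplotlib. Below is a minimal sketch, assuming matplotlib is installed (freud compute objects also provide a plot() convenience method):

import matplotlib.pyplot as plt

# Plot g(r) against the bin centers computed above.
fig, ax = plt.subplots()
ax.plot(r, y)
ax.set_xlabel("r")
ax.set_ylabel("g(r)")
plt.show()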

Support and Contribution

Please visit our repository on GitHub for the library source code. Any issues or bugs may be reported at our issue tracker, while questions and discussion can be directed to our user forum. All contributions to freud are welcome via pull requests!

Comments
  • CorrelationFunction behavior at 0

    Original report by Matthew Spellings (Bitbucket: mspells, GitHub: klarh).


    Currently, CorrelationFunction always sets the correlation function value at the first bin to 0 (more precisely, to the default-constructed value). This is acceptable if you're computing the self-correlation of one set with itself and don't want the first value, but if you're computing the cross-correlation between two different sets, it is not necessarily what you want.

    Consider the following code, which computes the correlation of one point at the origin (with associated value 1) with a randomly generated set of points whose associated values are the magnitudes of their radial distances from the origin. The correlation function should then be the identity.

    import numpy as np
    import numpy.testing as npt
    from freud import trajectory, density
    
    ref = np.array([[0, 0, 0]], dtype=np.float32)
    refVals = np.array([1], dtype=np.float32)
    rmax = 10.0
    dr = 1.0
    num_points = 10000
    box_size = rmax*3.1
    points = np.random.random_sample((num_points,3)).astype(np.float32)*box_size - box_size/2
    pointRs = np.sqrt(np.sum(points**2, axis=-1))
    
    cf = density.FloatCF(trajectory.Box(box_size), rmax, dr)
    cf.compute(ref, refVals, points, pointRs)
    
    import matplotlib.pyplot as pp
    pp.plot(cf.getR(), cf.getRDF())
    pp.show(block=True)
    

    Currently, however, master sets the value at the first bin to be 0 unconditionally.

    enhancement 
    opened by bdice 28
  • Fix disjoint set size in EnvironmentMotifMatch

    Description

    EnvironmentMotifMatch constructs an environment for each particle and matches it to the environment of a reference particle (the motif). The size of the local environment for each particle may, however, be larger than the motif size, since any subset of that environment may be sufficient to match the motif. Therefore, the m_max_num_neigh parameter cannot be set to the size of the motif, but must instead be computed dynamically from the NeighborList.
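
    In Python terms, the correct bound comes from the NeighborList itself rather than from the motif. A minimal sketch of that idea (illustrative only; the actual fix lives in the C++ EnvDisjointSet setup):

    import numpy as np
    import freud

    box, points = freud.data.make_random_system(box_size=10, num_points=100)
    nlist = freud.locality.AABBQuery(box, points).query(
        points, {"num_neighbors": 8, "exclude_ii": True}).toNeighborList()

    # The bound must accommodate the largest neighborhood actually present
    # in the NeighborList, which can exceed the motif size.
    max_num_neighbors = int(np.max(nlist.neighbor_counts))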

    The underlying bug appears to have always been present, but prior to freud 2.0 it was much harder to encounter: a user would have to manually construct a NeighborList and then pass a value of k (the number of neighbors) to the MatchEnv constructor that did not match the one used to build the NeighborList. The NeighborList constructed by default within MatchEnv always matched the value of k correctly.

    With freud 2.0, the new query syntax made it much easier for users to specify alternative neighbor specifications, and the value of k was no longer part of the class definition. #489 then introduced the specific error that made this case easy to hit by always setting EnvDisjointSet.m_max_num_neigh to the size of the motif, so any neighbor specification that produced particles with more neighbors in the NeighborList than the motif size would trigger the bug.

    Motivation and Context

    Resolves: #633

    How Has This Been Tested?

    Both the original script in #633 (with the data included there) and the example documented by @Charlottez112 in #978 segfault on my machine without this change, and both complete successfully once these changes are included.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds or improves functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [ ] I have updated the documentation (if relevant).
    • [x] I have added tests that cover my changes (if relevant).
    • [x] All new and existing tests passed.
    • [ ] I have updated the credits.
    • [x] I have updated the Changelog.
    bug 
    opened by vyasr 25
  • Standardize the freud API

    From roadmap planning session with @vyasr and @bdice. To be assigned after #176 is closed.

    Once #176 (doc cleaning) is done, we should identify what we want APIs to look like for everything in freud.

    1. Review existing APIs
    2. Determine a standard for all modules to follow, with minimal exceptions
    3. Create a list of cases where the standard is not currently followed

    The behavior of compute/accumulate/property getters is one notable case that should be standardized.

    For methods that we want to remove (e.g. getRDF, which should be replaced by a property), we will remove that method from the documentation and add deprecation warnings for version 2.0.

    For any class/function where the signature has to change, we will make use of *args, **kwargs to take variable APIs and then dynamically resolve them. Deprecation warnings will be issued wherever appropriate.
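
    As a sketch of the deprecation pattern for legacy getters like getRDF (the deprecated helper below is hypothetical, not freud's actual implementation):

    import functools
    import warnings

    def deprecated(replacement):
        """Hypothetical decorator: warn when a legacy getter is called."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                warnings.warn(
                    f"{func.__name__} is deprecated and will be removed in "
                    f"version 2.0; use {replacement} instead.",
                    DeprecationWarning, stacklevel=2)
                return func(*args, **kwargs)
            return wrapper
        return decorator

    class RDF:
        def __init__(self, rdf_array):
            self._rdf = rdf_array

        @property
        def rdf(self):
            return self._rdf

        @deprecated("the 'rdf' property")
        def getRDF(self):
            return self.rdf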

    enhancement documentation 
    opened by bdice 25
  • Store and re-use computed neighbor vectors.

    Description

    I am dealing with an unusual case with Voronoi neighbors in a sparse system (only a few particles), where particles can be their own neighbors or share multiple bonds with the same neighbor. For this case, it's insufficient to identify neighbors by their distance. Instead, I need the actual vectors computed by the Voronoi tessellation.

    • Make vec3, vec2, quat constexpr-qualified (also constexpr implies inline).
    • Add vectors to NeighborBond, NeighborList.
    • Store neighbor vectors in all neighbor bonds, use this data instead of re-computing bond vectors from the query points and points.
    • Ignore E402 because flake8 doesn't like some changes made by isort; I prefer isort so I'm ignoring the flake8 warning.

    There is an API break worth mentioning here: NeighborList.from_arrays can no longer accept distances, and instead requires vectors. (The distances are computed internally to match the vectors.) I could not think of a way to avoid this API break.
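
    A sketch of the post-change call, assuming the signature described above (vectors in place of distances):

    import numpy as np
    import freud

    # One bond per query point; distances are derived internally from vectors.
    nlist = freud.locality.NeighborList.from_arrays(
        num_query_points=2,
        num_points=3,
        query_point_indices=np.array([0, 1]),
        point_indices=np.array([1, 2]),
        vectors=np.array([[1.0, 0.0, 0.0],
                          [0.0, 2.0, 0.0]]),
    )
    print(nlist.distances)  # [1. 2.]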

    Motivation and Context

    In this PR, I re-work the NeighborBond and related classes to store the actual vector for each bond. This vector can be directly re-used in many analysis methods, avoiding the need to recompute box.wrap(points[point_index] - query_points[query_point_index]). I have benchmarks below. This should help improve correctness in a few edge cases like the Voronoi case I mentioned above, and should make it easier if we ever choose to let freud find neighbors beyond the nearest periodic image. I also caught a bug in the tests because of this change.

    How Has This Been Tested?

    Existing tests pass with minor tweaks.

    Performance:

    • RDF is basically unchanged
    • BondOrder is ~10% faster
    • PMFT is ~10% slower

    Overall I'm not concerned about the performance impact of this PR.

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds or improves functionality)
    • [x] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [x] I have updated the documentation (if relevant).
    • [x] I have added tests that cover my changes (if relevant).
    • [x] All new and existing tests passed.
    • [ ] I have updated the credits.
    • [ ] I have updated the Changelog.
    locality voronoi 
    opened by bdice 21
  • RDF.__init__ SEGFAULT

    Original report by Carl Simon Adorf (Bitbucket: csadorf, GitHub: csadorf).


    I'm currently porting my scripts to python3.4 on collins when I encountered this bug.

    The bug occurs when I try to calculate the RDF from a previously read XMLDCDTrajectory.

    • freud version: c02760af62e9482d58222deb158770ee05ce3368
    • hoomd version: HOOMD-blue v1.0.1 CUDA DOUBLE MPI SSE AVX
    • python: 3.4.1
    
    Boost.Python.ArgumentError: Python argument types in
        RDF.__init__(RDF, Box, float, float)
    did not match C++ signature:
        __init__(_object*, float, float)
    [collins:25249] *** Process received signal ***
    [collins:25249] Signal: Segmentation fault (11)
    [collins:25249] Signal code: Address not mapped (1)
    [collins:25249] Failing at address: 0xa0
    [collins:25249] [ 0] /lib64/libpthread.so.0(+0x10e50) [0x7f6d427f4e50]
    [collins:25249] [ 1] /usr/lib64/libpython3.4.so.1.0(+0x9bc38) [0x7f6d43c2cc38]
    [collins:25249] [ 2] /usr/lib64/libpython3.4.so.1.0(+0xa5897) [0x7f6d43c36897]
    [collins:25249] [ 3] /usr/lib64/libpython3.4.so.1.0(+0xa5297) [0x7f6d43c36297]
    [collins:25249] [ 4] /lib64/libc.so.6(__cxa_finalize+0x97) [0x7f6d3de4b327]
    [collins:25249] [ 5] /usr/lib64/libboost_python-3.4.so.1.55.0(+0x17743) [0x7f6d42e8a743]
    [collins:25249] *** End of error message ***
    Segmentation fault
    
    bug 
    opened by bdice 19
  • update cmake to only use TBB target

    Description

    Freud's CMake configuration used the TBB_INCLUDE_DIR and TBB_LIBRARY variables instead of just linking to the TBB build target, which caused build issues in #866. This PR fixes those build issues on certain systems and adopts modern CMake style.

    Motivation and Context

    Resolves: #866

    How Has This Been Tested?

    The CI will test the new CMake code on many different systems, and I will verify that the system configuration referenced in #866 builds freud correctly.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds or improves functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [ ] I have read the CONTRIBUTING document.
    • [ ] My code follows the code style of this project.
    • [ ] I have updated the documentation (if relevant).
    • [ ] I have added tests that cover my changes (if relevant).
    • [ ] All new and existing tests passed.
    • [ ] I have updated the credits.
    • [ ] I have updated the Changelog.
    bug building & installation 
    opened by tommy-waltmann 18
  • Add inplace argument

    Inplace argument added in box.wrap

    Description

    Added an inplace argument to both box.wrap and util._convert_array.

    Motivation and Context

    Reduces copying of input data and adds an option to operate on the data directly.
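
    A sketch of the intended usage (note that the merged API ultimately exposes this as an out argument, per the v2.6.0 release notes below):

    import numpy as np
    import freud

    box = freud.box.Box.cube(10)
    points = np.array([[6.0, 0.0, 0.0]], dtype=np.float32)

    # Wrapping into the input array avoids allocating a copy.
    box.wrap(points, out=points)
    print(points)  # [[-4.  0.  0.]]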

    How Has This Been Tested?

    Added unit tests in test_box_Box.py and test_util.py; all of them pass.

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds or improves functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [x] I have updated the documentation (if relevant).
    • [x] I have added tests that cover my changes (if relevant).
    • [x] All new and existing tests passed.
    • [x] I have updated the credits.
    • [x] I have updated the Changelog.
    opened by Charlottez112 18
  • Feature/enable skbuild

    Description

    Use scikit-build and CMake to build the C++ and Cython components of freud.

    Motivation and Context

    This pull request dramatically speeds up builds of freud (takes < 30 seconds on my Mac using Ninja). Changing to a proper build system should also dramatically simplify addressing issues like #464 and #629. It will also enable easier integration with IDEs for the C++ code, and long term it will enable exposing a C++ API for freud.

    Outstanding tasks:

    • [x] Enable automatic download of submodules.
    • [x] Contact voro++ maintainer to discuss modifications that would allow using it as a library (rather than having to handle compiling its sources ourselves).
    • [x] Update CI scripts to work with new build system.
    • [x] Update and test deployment (to Test PyPI).
    • [x] Update development requirements.
    • [x] Update documentation on building freud.

    Update -- The comments below are mostly outdated now due to later changes to the CMake configuration, but I'll leave them here so that we can track the discussions in the future if needed.

    This PR supersedes #661. I glanced through that PR, and I note that this implementation is different: rather than compiling a single shared object for the entire C++ library, I am instead following a more distributed model where each Cython module is compiled against the corresponding C++ code. We can change this in the future if we move towards a more thorough separation of the C++ code and exposing those APIs, but for now I'd prefer to retain this model since it's cleaner and generates leaner builds.

    @joaander as a more experienced CMake user, some specific questions for you (in addition to any review you can provide).

    • What is a reasonable minimum version of CMake to use? I've looked through the changelog etc briefly, but I'd prefer a more informed opinion.
    • If you look at freud/CMakeLists.txt, I'm forced to link the _util Object library directly to a couple of Cython modules rather than to the corresponding C++ Object libraries (i.e. _environment instead of environment). I think the reason is that the relevant C++ util code (the diagonalize.cc source file) is only included by the source files of the corresponding C++ modules, not the headers. As a result, I've marked the corresponding target includes (for instance, in cpp/environment/CMakeLists.txt) as PRIVATE; however, changing this to PUBLIC doesn't fix the issue, so I'm not certain this is correct. It may also be some limitation with include propagation for Object libraries that I'm not finding in the documentation. Any thoughts?
    • I wrote a simple FindTBB.cmake script, but I know that we discussed the more sophisticated version along with the config file that you have on HOOMD's next branch. Would you recommend I try to copy in that file and the corresponding macro?

    How Has This Been Tested?

    Existing builds on my machine are functional. Additional testing will come in the form of the to-do list above.

    Update - Building and testing on CI now works on all systems. The wheel building process has also been manually verified via push to test PyPI.

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds or improves functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [x] I have updated the documentation (if relevant).
    • [x] I have added tests that cover my changes (if relevant).
    • [x] All new and existing tests passed.
    • [x] I have updated the credits.
    • [x] I have updated the Changelog.
    enhancement building & installation 
    opened by vyasr 18
  • Hexatic normalization in docs doesn't match implementation

    @Plastikschuessel noted that the documentation for the Hexatic order parameter doesn't match the implementation. The docs say that the formula normalizes by 1/n, but the implemented code normalizes by 1/k. In many systems (particularly hexagonal and square crystals), k=n, but in some cases (e.g. 2D quasicrystals), the order is 12-fold even though any given particle has fewer than 12 neighbors.

    https://github.com/glotzerlab/freud/blob/3f44951a7eb8d6670664fc1dac22c451f17278a1/freud/order.pyx#L250-L251

    https://github.com/glotzerlab/freud/blob/3f44951a7eb8d6670664fc1dac22c451f17278a1/cpp/order/HexaticTranslational.cc#L38-L45

    A temporary workaround to achieve the desired behavior is to enable weighted=True in the constructor. In ball queries and nearest-neighbor queries (but not Voronoi queries), the weight for each bond defaults to 1. If weighted=True, the normalization divides by the number of neighbor bonds (a sum of 1 for each bond).
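
    A sketch of that workaround with a ball query, relying on the default bond weight of 1 described above (so the neighbor count n can differ from k):

    import freud

    box, points = freud.data.make_random_system(box_size=10, num_points=500, is2D=True)

    # weighted=True normalizes by the summed bond weights (1 per bond here),
    # i.e. by the actual neighbor count n rather than the symmetry order k.
    hex_order = freud.order.Hexatic(k=6, weighted=True)
    hex_order.compute(system=(box, points), neighbors={"r_max": 1.3})
    psi_6 = hex_order.particle_order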

    To resolve this issue, we should verify the literature to see which convention is more common and consider changing to normalization by 1/n instead of 1/k.

    bug order documentation 
    opened by bdice 17
  • 3D Voronoi Diagram : convex hull instead of cubic box ?

    Hi,

    I'm wondering if it's feasible to bound the 3D space by a dilation of the convex hull of a set of points instead of a cube?

    Indeed, for my problem I have points at the top and bottom surface which are not aligned on two planes. This creates quite big Voronoi cells at the top and/or bottom (see the 2D example below).

    [figure: 2D Voronoi diagram showing oversized cells along the open boundary]

    Also, I noticed in the example above that cells at the top have a high number of sides. Does that mean they are considered neighbors to a lot of other top cells, even if not shown in the plot?

    Thanks a lot for your help and congratulations on the repository!

    opened by Optimox 16
  • Static structure factor S(q)

    Description

    A commonly desired quantity from simulations is S(q), the static structure factor. The recently introduced diffraction module (#596) is an ideal location for this feature. In my understanding, the structure factor can be calculated two different ways: directly (which is expensive, O(N^2) for a system of N particles), or indirectly via a Fourier transform of a radial distribution function. I have heard there is significant controversy over the choice of method and the regimes in which each is correct. I hope to offer both methods in this pull request, as well as some clarity in the documentation for when each might be (in)appropriate for use.
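
    For orientation, a naive sketch of the direct pairwise route (the Debye formula), which averages sin(qr)/(qr) over all distinct pairs. This is illustrative only: it ignores periodic boundaries and the normalization details a real implementation must handle.

    import numpy as np

    def structure_factor_direct(points, q_values):
        """Naive O(N^2) evaluation of S(q) for a single frame."""
        n = len(points)
        r_ij = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        r_ij = r_ij[~np.eye(n, dtype=bool)]  # drop i == j self terms
        s_q = np.empty_like(q_values)
        for i, q in enumerate(q_values):
            # np.sinc(x) = sin(pi*x)/(pi*x), so this term is sin(q*r)/(q*r)
            s_q[i] = 1.0 + np.sum(np.sinc(q * r_ij / np.pi)) / n
        return s_q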

    The second thing to outline is the scope of this pull request. This is a first pass that will only support systems with a single species and only one set of particle positions (i.e. points == query_points). This PR is intentionally held to a very narrow scope, to reduce the complexity of the initial implementation and to solidify testing requirements for the "base case" upon which further features may someday be added, such as:

    • multi-species structure factors (via points and query_points as well as hints on how to normalize the result correctly)
    • accumulation over multiple frames (using freud's standard reset=False approach -- but may require additional normalizations!)
    • particle form factors (e.g. via a secondary array of values)
    • bonded contributions e.g. from polymers (I think there might be another term for this, but not sure)

    Motivation and Context

    Discussed with @ramanishsingh and also desired for my own research.

    The reason to implement this feature in freud (rather than point users to another existing package) is that it complements and can leverage the fast neighbor-finding and other features of freud, the feature itself can be implemented in parallelized C++, and it fits in the scope of colloidal-scale simulation analysis that freud emphasizes.

    Resolves: #652

    TODO (help welcome)

    • [x] Consolidate implementation code from Cython into C++
    • [x] Validate results approximately match against a known reference
    • [ ] Validate FFT-RDF method against direct method for q values greater than 4pi/L (or something like that) where they should agree
    • [ ] Write docs and seek expert knowledge (literature) about when each method is valid
    • [ ] Write tests to ensure behavior is not broken if/when new features are added

    How Has This Been Tested?

    I plan to compare this code against a few existing implementations to verify its accuracy.

    Validate against:

    • https://github.com/mattwthompson/scattering/
    • other implementations from Glotzer group members

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds or improves functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [x] I have updated the documentation (if relevant).
    • [x] I have added tests that cover my changes (if relevant).
    • [x] All new and existing tests passed.
    • [x] I have updated the credits.
    • [x] I have updated the Changelog.
    enhancement diffraction 
    opened by bdice 16
  • Bump numpy from 1.23.5 to 1.24.1

    Bumps numpy from 1.23.5 to 1.24.1.

    Release notes

    Sourced from numpy's releases.

    v1.24.1

    NumPy 1.24.1 Release Notes

    NumPy 1.24.1 is a maintenance release that fixes bugs and regressions discovered after the 1.24.0 release. The Python versions supported by this release are 3.8-3.11.

    Contributors

    A total of 12 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

    • Andrew Nelson
    • Ben Greiner +
    • Charles Harris
    • Clément Robert
    • Matteo Raso
    • Matti Picus
    • Melissa Weber Mendonça
    • Miles Cranmer
    • Ralf Gommers
    • Rohit Goswami
    • Sayed Adel
    • Sebastian Berg

    Pull requests merged

    A total of 18 pull requests were merged for this release.

    • #22820: BLD: add workaround in setup.py for newer setuptools
    • #22830: BLD: CIRRUS_TAG redux
    • #22831: DOC: fix a couple typos in 1.23 notes
    • #22832: BUG: Fix refcounting errors found using pytest-leaks
    • #22834: BUG, SIMD: Fix invalid value encountered in several ufuncs
    • #22837: TST: ignore more np.distutils.log imports
    • #22839: BUG: Do not use getdata() in np.ma.masked_invalid
    • #22847: BUG: Ensure correct behavior for rows ending in delimiter in...
    • #22848: BUG, SIMD: Fix the bitmask of the boolean comparison
    • #22857: BLD: Help raspian arm + clang 13 about __builtin_mul_overflow
    • #22858: API: Ensure a full mask is returned for masked_invalid
    • #22866: BUG: Polynomials now copy properly (#22669)
    • #22867: BUG, SIMD: Fix memory overlap in ufunc comparison loops
    • #22868: BUG: Fortify string casts against floating point warnings
    • #22875: TST: Ignore nan-warnings in randomized out tests
    • #22883: MAINT: restore npymath implementations needed for freebsd
    • #22884: BUG: Fix integer overflow in in1d for mixed integer dtypes #22877
    • #22887: BUG: Use whole file for encoding checks with charset_normalizer.

    Checksums

    ... (truncated)

    Commits
    • a28f4f2 Merge pull request #22888 from charris/prepare-1.24.1-release
    • f8fea39 REL: Prepare for the NumPY 1.24.1 release.
    • 6f491e0 Merge pull request #22887 from charris/backport-22872
    • 48f5fe4 BUG: Use whole file for encoding checks with charset_normalizer [f2py] (#22...
    • 0f3484a Merge pull request #22883 from charris/backport-22882
    • 002c60d Merge pull request #22884 from charris/backport-22878
    • 38ef9ce BUG: Fix integer overflow in in1d for mixed integer dtypes #22877 (#22878)
    • bb00c68 MAINT: restore npymath implementations needed for freebsd
    • 64e09c3 Merge pull request #22875 from charris/backport-22869
    • dc7bac6 TST: Ignore nan-warnings in randomized out tests
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    dependencies python 
    opened by dependabot[bot] 0
  • NeighborList Filters and SANN

    Description

    This PR adds a new concept to freud, the NeighborList filter, along with the SANN neighbor-finding method implemented as a NeighborList filter. The SANN method selects neighbors based on the solid angle occupied by each neighbor, up to a total of 4π. For more information, see https://aip.scitation.org/doi/10.1063/1.4729313

    I have also added a method for sorting a NeighborList by either distance or point_index to the Python API. It was needed for the SANN implementation, and I think it has general utility as well.
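
    For reference, a sketch of the SANN criterion from the linked paper: with candidate distances sorted in ascending order, pick the smallest m >= 3 such that the shell radius R(m) = (r_1 + ... + r_m) / (m - 2) is smaller than r_{m+1}. This is illustrative, not freud's implementation:

    import numpy as np

    def sann_neighbor_count(distances):
        """Return the SANN neighbor count m for one particle."""
        r = np.sort(np.asarray(distances))
        m = 3
        while m < len(r):
            if r[:m].sum() / (m - 2) < r[m]:  # R(m) closer than next candidate
                return m
            m += 1
        raise ValueError("not enough candidates for SANN to converge")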

    Motivation and Context

    This is a useful way of finding neighbors, which was not previously available in freud.

    How Has This Been Tested?

    Tests have been added in test_locality_Filter.py. I would like to see some validation of the calculation on a real system before this PR can be merged.

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds or improves functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [ ] I have read the CONTRIBUTING document.
    • [ ] My code follows the code style of this project.
    • [ ] I have updated the documentation (if relevant).
    • [ ] I have added tests that cover my changes (if relevant).
    • [ ] All new and existing tests passed.
    • [ ] I have updated the credits.
    • [ ] I have updated the Changelog.
    opened by tommy-waltmann 3
  • NEW feature: intermediate scattering function

    Description

    Please assist me in improving this code, @tommy-waltmann. It calculates the time-dependent intermediate scattering function. Here are the refs:

    1. https://en.wikipedia.org/wiki/Dynamic_structure_factor
    2. https://www.lehigh.edu/imi/teched/AtModel/Lecture_11_Micoulaut_Atomistics_Glass_Course.pdf

    Motivation and Context

    Here we add a class derived from StaticStructureFactorDirect to calculate the time-dependent intermediate scattering function. Resolves: #1040

    How Has This Been Tested?

    This code has not been tested yet; testing will come once the implementation is complete.

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds or improves functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] My code follows the code style of this project.
    • [ ] I have updated the documentation (if relevant).
    • [ ] I have added tests that cover my changes (if relevant).
    • [ ] All new and existing tests passed.
    • [ ] I have updated the credits.
    • [ ] I have updated the Changelog.
    opened by Roy-Kid 8
  • A Request for Intermediate Scattering function

    Description

    The intermediate scattering function is defined as the Fourier transform of the van Hove function G(r, t):

    F(k, t) = ∫ G(r, t) exp(−i k·r) dr

    Instead of a Fourier transform, these functions can also be computed directly from the atomic trajectories:

    F(k, t) = (1/N) ⟨ Σ_{j,l} exp(−i k·[r_j(t) − r_l(0)]) ⟩

    Fs and Fd, the self and distinct parts, collect the j = l and j ≠ l terms respectively.

    Is there any plan to support this function? Otherwise, I can write one following the structure factor code and the MSD module.

    Proposed Solution

    isf = freud.Scattering.Intermediate(k_space, ...)
    # points = (N_frames, N_particles, 3)
    # query_points = (N_frames, M_particles, 3)
    isf.compute((box, points)).query(query_points)
    isf.self_part
    isf.distinct_part
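
    As a starting point, a sketch of the self part for a single k-vector (a hypothetical helper, not freud API; a real implementation should also average over time origins and over a shell of k-vectors of equal magnitude):

    import numpy as np

    def self_isf(positions, k_vector):
        """Self part F_s(k, t) relative to frame 0.

        positions: (N_frames, N_particles, 3) unwrapped coordinates.
        """
        displacements = positions - positions[0]         # r_j(t) - r_j(0)
        phases = np.exp(-1j * displacements @ k_vector)  # per-particle phase
        return phases.mean(axis=1).real                  # average over particles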
    

    Additional Context

    Reference: https://www.lehigh.edu/imi/teched/AtModel/Lecture_11_Micoulaut_Atomistics_Glass_Course.pdf

    Developer

    Would someone else please implement this?

    enhancement 
    opened by Roy-Kid 4
  • Remove global search flag

    Description

    This PR removes the global_search flag in EnvironmentCluster, in favor of having users simply supply a NeighborList in which every particle is a neighbor of every other particle.

    Motivation and Context

    This PR makes the API more intuitive and less confusing.

    Resolves: #984

    How Has This Been Tested?

    I have converted a previous test that used the global_search=True option.

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds or improves functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Documentation improvement (updates to user guides, docstrings, or developer docs)

    Checklist:

    • [ ] I have read the CONTRIBUTING document.
    • [ ] My code follows the code style of this project.
    • [ ] I have updated the documentation (if relevant).
    • [ ] I have added tests that cover my changes (if relevant).
    • [ ] All new and existing tests passed.
    • [ ] I have updated the credits.
    • [ ] I have updated the Changelog.
    opened by tommy-waltmann 3
  • Clarifying local descriptors in documentation

    Description

    The environment.LocalDescriptors documentation doesn't really give a description of what the local descriptors are, how they are calculated, or even a reference that explains what they are. As far as I can tell, one would have to look through the source code to find out what this class actually calculates. In a previous version of the documentation there was an example, but it is no longer in the 'stable' branch. The documentation for this class should be more explicit about what it calculates and what its properties are.

    Motivation and Context

    As an end user, it is hard to know whether and how to use classes like this without more information about what they are calculating.

    task 
    opened by scmartin 2
Releases(v2.12.1)
  • v2.12.1(Dec 5, 2022)

    v2.12.1 -- 2022-12-05

    This release adds support for Python 3.11 and includes a small bug fix.

    Added

    • Support for Python 3.11.

    Fixed

    • n(r) property in freud.density.RDF is now properly normalized by the number of query points.
  • v2.12.0(Nov 9, 2022)

    v2.12.0 -- 2022-11-09

    This release adds the following features and compatibility changes:

    Added

    • Mass dependence in freud.cluster.ClusterProperties.
    • Inertia tensor calculation in freud.cluster.ClusterProperties.

    Fixed

    • Compatibility with new namespace for MDAnalysis.coordinates.timestep.Timestep.
  • v2.11.0(Aug 9, 2022)

    v2.11.0 -- 2022-08-09

    This release adds documentation improvements in a few modules, as well as the following changes:

    Added

    • Support for 2D systems in freud.diffraction.StaticStructureFactorDebye.
    • Compilation uses the C++17 standard.

    Fixed

    • EnvironmentMotifMatch correctly handles NeighborLists with more neighbors per particle than the motif.
  • v2.10.0(May 18, 2022)

    v2.10.0 -- 2022-05-18

    This release adds macOS-arm64 builds on PyPI and conda-forge, as well as the following changes:

    Added

    • include_input_points argument to freud.locality.PeriodicBuffer.
    • macos-arm64 binary builds on conda-forge and PyPI.

    Changed

    • freud.data.UnitCell.generate_system now generates positions in the same order as the basis positions.
  • v2.9.0(Apr 19, 2022)

    This release removes Cython as an install requirement and renames some properties in freud.diffraction.StaticStructureFactorDebye to be more descriptive, among the other updates listed below.

    Added

    • (breaking) Some freud.diffraction.StaticStructureFactorDebye property names changed to be more descriptive.
    • freud.diffraction.DiffractionPattern now raises an exception when used with non-cubic boxes.

    Fixed

    • freud.diffraction.StaticStructureFactorDebye implementation now gives S_k[0] = N.
    • Cython is no longer listed as an install requirement in setup.py.

    Removed

    • Custom CMake build type ReleaseWithDocs.
  • v2.8.0(Jan 25, 2022)

    This release includes a new method for computing the static structure factor, python 3.10 support, and other small changes listed below.

    Added

    • freud.diffraction.StaticStructureFactorDirect class (unstable) can be used to compute the static structure factor S(k) by sampling reciprocal space vectors.
    • Python 3.10 is supported.
    • Documentation examples are tested with pytest.
    • Use clang-format as pre-commit hook.
    • Add related tools section to the documentation.

    Fixed

    • freud.diffraction.DiffractionPattern normalization changed such that S(k=0) = N.
    • Added error checking for r_min, r_max arguments in freud.density.RDF, freud.locality.NeighborList, freud.locality.NeighborQuery, and freud.density.LocalDensity classes.
    • CMake build system only uses references to TBB target.

    Changed

    • Re-organized tests for the static structure factor classes.
    • Move util::Histogram<T>::Axes to util::Axes.
    • Use new flake8 plugin flake8-force for linting Cython code.
  • v2.7.0(Oct 1, 2021)

    This release includes a new static structure factor calculation, as well as performance improvements that were unintentionally introduced in an earlier version.

    Added

    • freud.diffraction.StaticStructureFactorDebye class (unstable) can be used to compute the static structure factor S(k) using the Debye formula.

    Fixed

    • Updated lambda functions to capture this by reference, to ensure compatibility with C++20 and above.
    • Fixed Box.contains to run in linear time, O(num_points).
    • Fixed compilation to pass compiler optimization flags when build type is ReleaseWithDocs (major perf regression since 2.4.1).
  • v2.6.2(Jun 26, 2021)

    This patch release fixes an error in the RPATH of Linux wheels. See #803 for details.

    Fixed

    • Upgrade to auditwheel 4.0.0 in cibuildwheel to ensure RPATH is patched properly for libfreud.so in Linux wheels.
  • v2.6.1(Jun 23, 2021)

    This patch release fixes the source distribution on PyPI, which did not include git submodules due to changes in CI.

    Fixed

    • Added missing git submodules to source distribution.
  • v2.6.0(Jun 22, 2021)

    Version 2.6.0 includes multiple fixes and improvements to the Steinhardt and DiffractionPattern classes. We also introduce various Box method options that allow in-place modification of arrays.

    Added

    • Added out option for the wrap, unwrap, make_absolute, and make_fractional methods of Box.
    • The Steinhardt and SolidLiquid classes expose the raw qlmi arrays.
    • The Steinhardt class supports computing order parameters for multiple l.

    Changed

    • Improvements to plotting for the DiffractionPattern.
    • Wheels are now built with cibuildwheel.

    Fixed

    • Fixed/Improved the k values and vectors in the DiffractionPattern (more improvement needed).
    • Fixed incorrect computation of Steinhardt averaged quantities. Affects all previous versions of freud 2.
    • Fixed documented formulas for Steinhardt class.
    • Fixed broken arXiv links in bibliography.
  • v2.5.1(Apr 8, 2021)

  • v2.5.0(Mar 24, 2021)

    v2.5.0 - 2021-03-16

    Changed

    • NeighborList filter method has been optimized.
    • TBB 2021 is now supported (removed use of deprecated TBB features).
    • Added new pre-commit hooks for black, isort, and pyupgrade.
    • Testing framework now uses pytest.
  • v2.4.0(Nov 10, 2020)

    Note: the tarball released to PyPI was missing CMake files. The source tarball attached here matches the tarball located at http://glotzerlab.engin.umich.edu/Downloads/freud/. (The tarball generated by GitHub does not include the contents of git submodules, which are required to build.)

    freud-v2.4.0.tar.gz(139.24 MB)
  • v2.3.0(Sep 9, 2020)

    Note: the tarball released to PyPI was missing Cython files (*.pyx, *.pxd). This is corrected in #653. The source tarball attached here matches the tarball located at http://glotzerlab.engin.umich.edu/Downloads/freud/. (The tarball generated by GitHub does not include the contents of git submodules, which are required to build.)

    freud-v2.3.0.tar.gz(138.56 MB)
Owner

Glotzer Group
We develop molecular simulation tools to study the self-assembly of complex materials and explore matter at the nanoscale.