freud documentation


Overview

The freud Python library provides a simple, flexible, powerful set of tools for analyzing trajectories obtained from molecular dynamics or Monte Carlo simulations. High-performance, parallelized C++ is used to compute standard analyses such as radial distribution functions, correlation functions, and clusters, as well as original analysis methods including potentials of mean force and torque (PMFTs) and local environment matching. The freud library uses single-precision NumPy arrays for input and output, enabling integration with the scientific Python ecosystem for many typical materials science workflows.

Installation

Installing freud

The freud library can be installed via conda or pip, or compiled from source.

Install via conda

The code below will install freud from conda-forge.

conda install -c conda-forge freud
Install via pip

The code below will install freud from PyPI.

pip install freud-analysis
Compile from source

The following are required for installing freud:

  • NumPy
  • Intel Threading Building Blocks (TBB)

The following are optional for installing freud:

  • Cython (version 0.28 or newer): The freud repository contains Cython-generated *.cpp files in the freud/ directory that can be used directly. However, Cython is necessary if you wish to recompile these files.

For conda users, these requirements can be met by installing the following packages from the conda-forge channel:

conda install -c conda-forge tbb tbb-devel numpy cython

The code that follows builds freud and installs it for all users (append --user if you wish to install it to your user site directory):

git clone --recurse-submodules https://github.com/glotzerlab/freud.git
cd freud
python setup.py install

You can also build freud in place so that you can run from within the folder:

# Build freud in place from the repository root
python setup.py build_ext --inplace

Building freud in place has certain advantages, since it does not affect your Python behavior except within the freud directory itself (where freud can be imported after building). Additionally, due to limitations inherent to the distutils/setuptools infrastructure, building extension modules can only be parallelized using the build_ext subcommand of setup.py, not with install. As a result, it will be faster to manually run build_ext and then install (which normally calls build_ext under the hood anyway) the built packages. In general, the following options are available for setup.py in addition to the standard setuptools options (notes are included to indicate which options are only available for specific subcommands such as build_ext):

--PRINT-WARNINGS

Specify whether or not to print compilation warnings resulting from the build even if the build succeeds with no errors.

--ENABLE-CYTHON

Rebuild the Cython-generated C++ files. If there are any unexpected issues with compiling the C++ shipped with the build, using this flag may help. It is also necessary any time modifications are made to the Cython files.

-j

Compile in parallel. This affects both the generation of C++ files from Cython files and the subsequent compilation of the source files. In the latter case, this option controls the number of Python modules that will be compiled in parallel.

--TBB-ROOT

The root directory where TBB is installed. Useful if TBB is installed in a non-standard location or cannot be located by Python for some other reason. Note that this information can also be provided using the environment variable TBB_ROOT. The options --TBB-INCLUDE and --TBB-LINK will take precedence over --TBB-ROOT if both are specified.

--TBB-INCLUDE

The directory where the TBB headers (e.g. tbb.h) are located. Useful if TBB is installed in a non-standard location or cannot be located by Python for some other reason. Note that this information can also be provided using the environment variable TBB_ROOT. The options --TBB-INCLUDE and --TBB-LINK will take precedence over --TBB-ROOT if both are specified.

--TBB-LINK

The directory where the TBB shared library (e.g. libtbb.so or libtbb.dylib) is located. Useful if TBB is installed in a non-standard location or cannot be located by Python for some other reason. Note that this information can also be provided using the environment variable TBB_ROOT. The options --TBB-INCLUDE and --TBB-LINK will take precedence over --TBB-ROOT if both are specified.

The following additional arguments are primarily useful for developers:

--COVERAGE

Build the Cython files with coveragerc support to check unit test coverage.

--NTHREAD

Specify the number of threads to allocate to compiling each module. This option is primarily useful for rapid development, particularly when all changes are in one module. While the -j option will not help parallelize this case, this option allows compilation of multiple source files belonging to the same module in parallel.

Note

freud makes use of submodules. If you ever wish to manually update these, you can execute:

git submodule update --init

Unit Tests

The unit tests for freud are included in the repository and are configured to be run using the Python unittest library:

# Run tests from the tests directory
cd tests
python -m unittest discover .

Note that because freud is designed to require installation to run (i.e. it cannot be run directly out of the build directory), importing freud from the root of the repository will fail because it will try to import the package folder. As a result, unit tests must be run from outside the root directory if you wish to test the installed version of freud. If you want to run tests within the root directory, you can instead build freud in place:

# Build freud in place from the root of the repository
python setup.py build_ext --inplace

This build will place the necessary files alongside the freud source files so that freud can be imported from the root of the repository.

Documentation

The documentation for freud is hosted online at ReadTheDocs, but you may also build the documentation yourself:

Building the documentation

The following are required for building freud documentation:

  • Sphinx

You can install sphinx using conda

conda install sphinx

or from PyPI

pip install sphinx

To build the documentation, run the following commands in the source directory:

cd doc
make html
# Then open build/html/index.html

To build a PDF of the documentation (requires LaTeX and/or PDFLaTeX):

cd doc
make latexpdf
# Then open build/latex/freud.pdf

Examples

Examples are provided as Jupyter notebooks in a separate freud-examples repository. These notebooks may be launched interactively on Binder or downloaded and run on your own system. Visualization of data is done via Matplotlib [Matplotlib] and Bokeh [Bokeh], unless otherwise noted.

Key concepts

There are a few critical concepts, algorithms, and data structures that are central to all of freud. The box module defines the concept of a periodic simulation box, and the locality module defines methods for finding nearest neighbors for particles. Since both of these are used throughout freud, we recommend familiarizing yourself with these first, before delving into the workings of specific freud analysis modules.

Box

The goal of freud is to perform generic analyses of particle simulations. Such simulations are always conducted within some region representing physical space; in freud, these regions are known as simulation boxes, or simply boxes. An important characteristic of many simulations is that the simulation box is periodic, i.e. particles can travel and interact across system boundaries (for more information, see the Wikipedia page). Simulations frequently use periodic boundary conditions to effectively simulate infinite systems without actually having to include an infinite number of particles. In such systems, a box in N dimensions can be represented by N linearly independent vectors.

The Box class provides the standard API for such simulation boxes throughout freud. The class represents some 2- or 3-dimensional region of space, and it provides utility functions for interacting with this space, including the ability to wrap vectors outside this box into the box according to periodic boundary conditions. Boxes are represented according to the HOOMD-blue convention for boxes. According to this convention, a 3D (2D) simulation box is fully defined by 3 (2) linearly independent vectors, which are represented by 3 (2) characteristic lengths and 3 (1) tilt factors indicating how these vectors are angled with respect to one another. With this convention, a generic box is represented by the following \(3\times3\) matrix:

\[\left( \begin{array}{ccc} L_x & xy \times L_y & xz \times L_z\\ 0 & L_y & yz \times L_z\\ 0 & 0 & L_z\\ \end{array} \right)\]

where \(xy\), \(xz\), and \(yz\) are the tilt factors. Note that this convention imposes the requirement that the box vectors form a right-handed coordinate system, which manifests itself in the form of an upper (rather than lower) triangular box matrix.
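As a minimal illustration of this convention (a standalone sketch using only the Box constructor and the to_matrix method shown later in this section), the snippet below builds a small triclinic box and prints its matrix form; the off-diagonal entries are the products \(xy \times L_y\), \(xz \times L_z\), and \(yz \times L_z\) described above:

import freud.box

# A triclinic box with Lx=2, Ly=4, Lz=6 and tilt factors xy=0.5, xz=0.25, yz=0.75.
box = freud.box.Box(Lx=2, Ly=4, Lz=6, xy=0.5, xz=0.25, yz=0.75)

# The matrix is upper triangular, with off-diagonal entries
# xy*Ly = 2.0, xz*Lz = 1.5, and yz*Lz = 4.5.
print(box.to_matrix())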

In this notebook, we demonstrate the basic features of the Box class, particularly the facility for wrapping particles back into the box under periodic boundary conditions. For more information, see the freud.box documentation.

Box Creation

There are many ways to construct a box. We demonstrate all of these below, with some discussion of when they might be useful.

Default (full) API

Boxes may be constructed explicitly using all arguments. Such construction is useful when performing ad hoc analyses involving custom boxes. In general, boxes are assumed to be 3D and orthorhombic unless otherwise specified.

[1]:
import freud.box

# All of the below examples are valid boxes.
box = freud.box.Box(Lx=5, Ly=6, Lz=7, xy=0.5, xz=0.6, yz=0.7, is2D=False)
box = freud.box.Box(1, 3, 2, 0.3, 0.9)
box = freud.box.Box(5, 6, 7)
box = freud.box.Box(5, 6, is2D=True)
box = freud.box.Box(5, 6, xy=0.5, is2D=True)
From a box object

The simplest case is constructing one freud box from another.

Note that all forms of creating boxes aside from the explicit method above use methods defined within the Box class rather than attempting to overload the constructor itself.

[2]:
box = freud.box.Box(1, 2, 3)
box2 = freud.box.Box.from_box(box)
print("The original box: \n\t{}".format(box))
print("The copied box: \n\t{}\n".format(box2))

# Boxes are always copied by value, not by reference
box.Lx = 5
print("The original box is modified: \n\t{}".format(box))
print("The copied box is not: \n\t{}\n".format(box2))

# Note, however, that box assignment creates a new object that
# still points to the original box object, so modifications to
# one are visible on the other.
box3 = box2
print("The new copy: \n\t{}".format(box3))
box2.Lx = 2
print("The new copy after the original is modified: \n\t{}".format(box3))
print("The modified original box: \n\t{}".format(box2))
The original box:
        freud.box.Box(Lx=1.0, Ly=2.0, Lz=3.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)
The copied box:
        freud.box.Box(Lx=1.0, Ly=2.0, Lz=3.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)

The original box is modified:
        freud.box.Box(Lx=5.0, Ly=2.0, Lz=3.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)
The copied box is not:
        freud.box.Box(Lx=1.0, Ly=2.0, Lz=3.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)

The new copy:
        freud.box.Box(Lx=1.0, Ly=2.0, Lz=3.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)
The new copy after the original is modified:
        freud.box.Box(Lx=2.0, Ly=2.0, Lz=3.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)
The modified original box:
        freud.box.Box(Lx=2.0, Ly=2.0, Lz=3.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)
From a matrix

A box can be constructed directly from the box matrix representation described above using the Box.from_matrix method.

[3]:
# Matrix representation. Note that the box vectors must represent
# a right-handed coordinate system! This translates to requiring
# that the matrix be upper triangular.
box = freud.box.Box.from_matrix([[1, 1, 0], [0, 1, 0.5], [0, 0, 0.5]])
print("This is a 3D box from a matrix: \n\t{}\n".format(box))

# 2D box
box = freud.box.Box.from_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
print("This is a 2D box from a matrix: \n\t{}\n".format(box))

# Automatic matrix detection using from_box
box = freud.box.Box.from_box([[1, 1, 0], [0, 1, 0.5], [0, 0, 0.5]])
print("The box matrix was automatically detected: \n\t{}\n".format(box))

# Boxes can be numpy arrays as well
import numpy as np
box = freud.box.Box.from_box(np.array([[1, 1, 0], [0, 1, 0.5], [0, 0, 0.5]]))
print("Using a 3x3 numpy array: \n\t{}".format(box))
This is a 3D box from a matrix:
        freud.box.Box(Lx=1.0, Ly=1.0, Lz=0.5, xy=1.0, xz=0.0, yz=1.0, is2D=False)

This is a 2D box from a matrix:
        freud.box.Box(Lx=1.0, Ly=1.0, Lz=0.0, xy=0.0, xz=0.0, yz=0.0, is2D=True)

The box matrix was automatically detected:
        freud.box.Box(Lx=1.0, Ly=1.0, Lz=0.5, xy=1.0, xz=0.0, yz=1.0, is2D=False)

Using a 3x3 numpy array:
        freud.box.Box(Lx=1.0, Ly=1.0, Lz=0.5, xy=1.0, xz=0.0, yz=1.0, is2D=False)
From a namedtuple or dict

A box can also be constructed from a namedtuple with the appropriate entries. Any other object that provides a similar API for attribute-based access of \(L_x\), \(L_y\), \(L_z\), \(xy\), \(xz\), and \(yz\) (or some subset) will work equally well. This method is suitable for passing in box objects constructed by some other program, for example.

[4]:
from collections import namedtuple
MyBox = namedtuple('mybox', ['Lx', 'Ly', 'Lz', 'xy', 'xz', 'yz', 'dimensions'])

box = freud.box.Box.from_box(MyBox(Lx=5, Ly=3, Lz=2, xy=0, xz=0, yz=0, dimensions=3))
print("Box from named tuple: \n\t{}\n".format(box))

box = freud.box.Box.from_box(MyBox(Lx=5, Ly=3, Lz=0, xy=0, xz=0, yz=0, dimensions=2))
print("2D Box from named tuple: \n\t{}".format(box))
Box from named tuple:
        freud.box.Box(Lx=5.0, Ly=3.0, Lz=2.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)

2D Box from named tuple:
        freud.box.Box(Lx=5.0, Ly=3.0, Lz=0.0, xy=0.0, xz=0.0, yz=0.0, is2D=True)

Similarly, construction is also possible using any object that supports key-value indexing, such as a dict.

[5]:
box = freud.box.Box.from_box(dict(Lx=5, Ly=3, Lz=2))
print("Box from dict: \n\t{}".format(box))
Box from dict:
        freud.box.Box(Lx=5.0, Ly=3.0, Lz=2.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)
From a list

Finally, boxes can be constructed from any simple iterable that provides the elements in the correct order.

[6]:
box = freud.box.Box.from_box((5, 6, 7, 0.5, 0, 0.5))
print("Box from tuple: \n\t{}\n".format(box))

box = freud.box.Box.from_box([5, 6])
print("2D Box from list: \n\t{}".format(box))
Box from tuple:
        freud.box.Box(Lx=5.0, Ly=6.0, Lz=7.0, xy=0.5, xz=0.0, yz=0.5, is2D=False)

2D Box from list:
        freud.box.Box(Lx=5.0, Ly=6.0, Lz=0.0, xy=0.0, xz=0.0, yz=0.0, is2D=True)
Convenience APIs

We also provide convenience constructors for common geometries, namely square (2D) and cubic (3D) boxes.

[7]:
cube_box = freud.box.Box.cube(L=5)
print("Cubic Box: \n\t{}\n".format(cube_box))

square_box = freud.box.Box.square(L=5)
print("Square Box: \n\t{}".format(square_box))
Cubic Box:
        freud.box.Box(Lx=5.0, Ly=5.0, Lz=5.0, xy=0.0, xz=0.0, yz=0.0, is2D=False)

Square Box:
        freud.box.Box(Lx=5.0, Ly=5.0, Lz=0.0, xy=0.0, xz=0.0, yz=0.0, is2D=True)
Export

If you want to export or display the box, you can convert box objects into their matrix or namedtuple representations, which provide complete descriptions of the box. Note that the namedtuple type used by freud boxes, the BoxTuple, is simply an internal representation.

[8]:
cube_box = freud.box.Box.cube(L=5)
cube_box.to_matrix()
[8]:
array([[5., 0., 0.],
       [0., 5., 0.],
       [0., 0., 5.]])
[9]:
cube_box.to_tuple()
[9]:
BoxTuple(Lx=5.0, Ly=5.0, Lz=5.0, xy=0.0, xz=0.0, yz=0.0)
Using boxes

Given a freud box object, you can query it for all its attributes.

[10]:
box = freud.box.Box.from_matrix([[10, 0, 0], [0, 10, 0], [0, 0, 10]])
print("L_x = {}, L_y = {}, L_z = {}, xy = {}, xz = {}, yz = {}".format(
    box.Lx, box.Ly, box.Lz, box.xy, box.xz, box.yz))

print("The length vector: {}".format(box.L))
print("The inverse length vector: ({:1.2f}, {:1.2f}, {:1.2f})".format(*[L for L in box.Linv]))
L_x = 10.0, L_y = 10.0, L_z = 10.0, xy = 0.0, xz = 0.0, yz = 0.0
The length vector: [10. 10. 10.]
The inverse length vector: (0.10, 0.10, 0.10)

Boxes also support converting to and from fractional coordinates.

Note that the origin in real coordinates is defined at the center of the box. This means the fractional coordinate range \([0, 1]\) maps onto \([-L/2, L/2]\), not \([0, L]\).

[11]:
# Conversion to coordinate representation from fractions.
print(box.makeCoordinates([0, 0, 0]))
print(box.makeCoordinates([0.5, 0.5, 0.5]))
print(box.makeCoordinates([0.8, 0.3, 1]))
print()

# Conversion to and from coordinate representation, resulting
# in the input fractions.
print(box.makeFraction(box.makeCoordinates([0, 0, 0])))
print(box.makeFraction(box.makeCoordinates([0.5, 0.5, 0.5])))
print("[{:1.1f}, {:1.1f}, {:1.1f}]".format(*box.makeFraction(box.makeCoordinates([0.8, 0.3, 1]))))
[-5. -5. -5.]
[0. 0. 0.]
[ 3. -2.  5.]

[0. 0. 0.]
[0.5 0.5 0.5]
[0.8, 0.3, 1.0]

Finally (and most critically for enforcing periodicity), boxes support wrapping vectors from outside the box into the box. The concept of periodicity and box wrapping is most easily demonstrated visually.

[12]:
# We define box plot generation separately
from util import box_2d_to_points

# Construct the box and get points for plotting
Lx = Ly = 10
xy = 0.5
box = freud.box.Box.from_matrix([[Lx, xy*Ly, 0], [0, Ly, 0], [0, 0, 0]])
points = box_2d_to_points(box)
[13]:
from matplotlib import pyplot as plt
fig, ax = plt.subplots(figsize=(9, 6))
ax.plot(points[:, 0], points[:, 1], color='k')
plt.show()
<Figure size 900x600 with 1 Axes>
[14]:
plt.figure()
plt.plot(points[:, 0], points[:, 1])
plt.show()
_images/examples_module_intros_Box-Box_26_0.png

With periodic boundary conditions, what this actually represents is an infinite set of these boxes tiling space. For example, you can picture this box as surrounded by a set of identical boxes.

[15]:
fig, ax = plt.subplots(figsize=(9, 6))
ax.plot(points[:, 0], points[:, 1], color='k')
ax.plot(points[:, 0] + Lx, points[:, 1], linestyle='dashed', color='k')
ax.plot(points[:, 0] - Lx, points[:, 1], linestyle='dashed', color='k')
ax.plot(points[:, 0] + xy*Ly, points[:, 1] + Ly, linestyle='dashed', color='k')
ax.plot(points[:, 0] - xy*Ly, points[:, 1] - Ly, linestyle='dashed', color='k')
plt.show()
_images/examples_module_intros_Box-Box_28_0.png

Any particles in the original box will also therefore be seen as existing in all the neighboring boxes.

[16]:
np.random.seed(0)
tmp = np.random.rand(5, 2)
origin = np.array(box.makeCoordinates([0, 0, 0]))
u = np.array(box.makeCoordinates([1, 0, 0])) - origin
v = np.array(box.makeCoordinates([0, 1, 0])) - origin
particles = u*tmp[:, [0]] + v*tmp[:, [1]]
[17]:
fig, ax = plt.subplots(figsize=(9, 6))

# Plot the boxes.
ax.plot(points[:, 0], points[:, 1], color='k')
ax.plot(points[:, 0] + Lx, points[:, 1], linestyle='dashed', color='k')
ax.plot(points[:, 0] - Lx, points[:, 1], linestyle='dashed', color='k')
ax.plot(points[:, 0] + xy*Ly, points[:, 1] + Ly, linestyle='dashed', color='k')
ax.plot(points[:, 0] - xy*Ly, points[:, 1] - Ly, linestyle='dashed', color='k')

# Plot the points in the original box.
ax.plot(particles[:, 0] + origin[0], particles[:, 1] + origin[1],
        linestyle='None', marker='.', color='#1f77b4')

# Define the different origins.
origins = []
origins.append(np.array(box.makeCoordinates([-1, 0, 0])))
origins.append(np.array(box.makeCoordinates([1, 0, 0])))
origins.append(np.array(box.makeCoordinates([0, -1, 0])))
origins.append(np.array(box.makeCoordinates([0, 1, 0])))

# Plot particles in each of the periodic boxes.
for o in origins:
    ax.plot(particles[:, 0] + o[0], particles[:, 1] + o[1],
            linestyle='None', marker='.', color='#1f77b4')
plt.show()
_images/examples_module_intros_Box-Box_31_0.png

Box wrapping takes points in the periodic images of a box, and brings them back into the original box. In this context, that means that if we apply wrap to each of the sets of particles plotted above, they should all overlap.

[18]:
fig, axes = plt.subplots(2, 2, figsize=(12, 8))

# Plot the boxes.
for i, ax in enumerate(axes.flatten()):
    ax.plot(points[:, 0], points[:, 1], color='k')
    ax.plot(points[:, 0] + Lx, points[:, 1], linestyle='dashed', color='k')
    ax.plot(points[:, 0] - Lx, points[:, 1], linestyle='dashed', color='k')
    ax.plot(points[:, 0] + xy*Ly, points[:, 1] + Ly, linestyle='dashed', color='k')
    ax.plot(points[:, 0] - xy*Ly, points[:, 1] - Ly, linestyle='dashed', color='k')

    # Plot the points relative to origin i.
    o = origins[i]
    ax.plot(particles[:, 0] + o[0], particles[:, 1] + o[1],
            linestyle='None', marker='.', label='Original')

    # Now wrap these points and plot them.
    wrapped_particles = box.wrap(particles + o)
    ax.plot(wrapped_particles[:, 0], wrapped_particles[:, 1],
            linestyle='None', marker='.', label='Wrapped')
    ax.tick_params(axis="both", which="both", labelsize=14)

    ax.legend(fontsize=14)
plt.show()
_images/examples_module_intros_Box-Box_33_0.png
ParticleBuffer - Unit Cell RDF

The ParticleBuffer class is meant to replicate particles beyond a single image while respecting box periodicity. This example demonstrates how we can use this to compute the radial distribution function from a sample crystal’s unit cell.

[1]:
import freud
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from util import box_2d_to_points

Here, we create a box to represent the unit cell and put two points inside. We plot the box and points below.

[2]:
box = freud.box.Box(Lx=2, Ly=2, xy=np.sqrt(1/3), is2D=True)
points = np.asarray([[-0.5, -0.5, -0.5], [0.5, 0.5, 0.5]])
corners = box_2d_to_points(box)
ax = plt.gca()
box_patch = plt.Polygon(corners[:, :2])
patch_collection = matplotlib.collections.PatchCollection([box_patch], edgecolors='black', alpha=0.4)
ax.add_collection(patch_collection)
plt.scatter(points[:, 0], points[:, 1])
plt.show()
_images/examples_module_intros_Box-ParticleBuffer_3_0.png

Next, we create a ParticleBuffer instance and have it compute the “buffer” particles that lie outside the first periodicity. These positions are stored in the buffer_particles attribute. The corresponding buffer_ids array gives a mapping from the index of the buffer particle to the index of the particle it was replicated from, in the original array of points. Finally, the buffer_box attribute returns a larger box, expanded from the original box to contain the replicated points.

[3]:
pbuff = freud.box.ParticleBuffer(box)
pbuff.compute(points, 6, images=True)
print(pbuff.buffer_particles[:10], '...')
[[ 0.6547002  1.5        0.       ]
 [ 1.8094003  3.5        0.       ]
 [ 2.9641018  5.5        0.       ]
 [-3.9641013 -6.5        0.       ]
 [-2.809401  -4.4999995  0.       ]
 [-1.6547002 -2.5000005  0.       ]
 [ 1.5000002 -0.5        0.       ]
 [ 2.6547008  1.5        0.       ]
 [ 3.8094003  3.5        0.       ]
 [ 4.964102   5.5        0.       ]] ...

Below, we plot the original unit cell and the replicated buffer points and buffer box.

[4]:
plt.scatter(points[:, 0], points[:, 1])
plt.scatter(pbuff.buffer_particles[:, 0], pbuff.buffer_particles[:, 1])
box_patch = plt.Polygon(corners[:, :2])
buff_corners = box_2d_to_points(pbuff.buffer_box)
buff_box_patch = plt.Polygon(buff_corners[:, :2])
patch_collection = matplotlib.collections.PatchCollection(
    [box_patch, buff_box_patch], facecolors=['blue', 'orange'],
    edgecolors='black', alpha=0.2)
plt.gca().add_collection(patch_collection)
plt.show()
_images/examples_module_intros_Box-ParticleBuffer_7_0.png

Finally, we can plot the radial distribution function (RDF) of this replicated system, using a value of rmax that is larger than the size of the original box. This allows us to see the interaction of the original particles in ref_points with their replicated neighbors from the buffer in points.

[5]:
rdf = freud.density.RDF(rmax=5, dr=0.02)
rdf.compute(pbuff.buffer_box, ref_points=points, points=pbuff.buffer_particles)
plt.plot(rdf.R, rdf.RDF)
plt.show()
_images/examples_module_intros_Box-ParticleBuffer_9_0.png
LinkCell

Many of the most powerful analyses of particle simulations involve some characterization of the local environments of particles. Whether the analyses involve finding clusters, identifying interfaces, computing order parameters, or something else entirely, they always require finding particles in proximity to others so that properties of the local environment can be computed. The freud.locality.NeighborList and freud.locality.LinkCell classes are the fundamental building blocks for this type of calculation. The NeighborList class is essentially a container for particle pairs that are determined to be adjacent to one another. The LinkCell class implements the standard linked-list cell algorithm, in which a cell list is computed using linked lists to store the particles in each cell. In this notebook, we provide a brief demonstration of how this data structure works and how it is used throughout freud.

We begin by demonstrating how a cell list works: essentially, space is divided into fixed-width cells.

[1]:
from __future__ import division
import freud
import numpy as np
from matplotlib import pyplot as plt
import timeit

# Place particles 0 and 1 in cell 0,
# particle 2 in cell 1,
# particles 3, 4, 5 in cell 2,
# and no particles in cell 3.
particles = np.array([[-0.5, -0.5, 0],
                      [-0.6, -0.6, 0],
                      [0.5, -0.5, 0],
                      [-0.5, 0.5, 0],
                      [-0.6, 0.6, 0],
                      [-0.7, 0.7, 0]], dtype='float32')


L = 2  # The box size
r_max = 1  # The cell width, and the nearest neighbor distance
box = freud.box.Box.square(L)
lc = freud.locality.LinkCell(box, r_max)
lc.compute(box, particles)

for c in range(0, lc.num_cells):
    print("The following particles are in cell {}: {}".format(c, ', '.join([str(x) for x in lc.itercell(c)])))
The following particles are in cell 0: 0, 1
The following particles are in cell 1: 2
The following particles are in cell 2: 3, 4, 5
The following particles are in cell 3:
[2]:
from matplotlib import patches
from matplotlib import cm
cmap = cm.get_cmap('plasma')
colors = [cmap(i/lc.num_cells) for i in range(lc.num_cells)]

fig, ax = plt.subplots(figsize=(9, 6))
ax.scatter(particles[:, 0], particles[:, 1])
ax.set_xlim([-1, 1])
ax.set_ylim([-1, 1])
corners = [(-1, -1), (0, -1), (-1, 0), (0, 0)]
handles = []
labels = []
for i, corner in enumerate(corners):
    p = patches.Rectangle(corner, 1, 1, color=colors[i], alpha=0.3)
    ax.add_patch(p)
    handles.append(p)
    labels.append("Cell {}".format(i))
ax.tick_params(axis='both', which='both', labelsize=14)
fig.legend(handles, labels, fontsize=16)
fig.subplots_adjust(right=0.8)
for i, p in enumerate(particles):
    ax.text(p[0]+0.05, p[1]-0.03, "Particle {}".format(i), fontsize=14)
_images/examples_module_intros_Locality-LinkCell_3_0.png

The principle behind a cell list is that depending on how close particles have to be to be considered neighbors, we can construct a cell list of an appropriate width such that a given particle’s neighbors can always be found by only looking in the neighboring cells, saving us the work of checking all the other particles in the system. We can now extract the NeighborList object computed using this cell list for finding particle neighbors.

[3]:
nlist = lc.nlist
for i in set(nlist.index_i):
    js = nlist.index_j[nlist.index_i == i]
    print("The particles within a distance 1 of particle {} are: {}".format(
        i, ', '.join([str(j) for j in js])))
The particles within a distance 1 of particle 0 are: 1, 4, 5
The particles within a distance 1 of particle 1 are: 0, 2, 3, 4, 5
The particles within a distance 1 of particle 2 are: 1
The particles within a distance 1 of particle 3 are: 1, 4, 5
The particles within a distance 1 of particle 4 are: 0, 1, 3, 5
The particles within a distance 1 of particle 5 are: 0, 1, 3, 4

Finally, we can easily check this computation manually by just computing particle distances. Note that we need to be careful to make sure that we properly respect the box periodicity, which means that interparticle distances should be calculated according to the minimum image convention. In essence, this means that since the box is treated as being infinitely replicated in all directions, we have to ensure that each particle is only interacting with the closest copy of another particle. We can easily enforce this here by making sure that particle distances are never larger than half the box length in any given dimension.
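As a minimal illustration of the minimum image convention (a standalone snippet, not part of the original notebook), consider two particles in this \(L = 2\) box whose x coordinates are 0.9 and -0.9: their naive separation is 1.8, but the nearest periodic image is only 0.2 away.

import numpy as np

L = 2.0                     # the box length used in this example
x1, x2 = 0.9, -0.9          # coordinates of two particles along one axis
dx = x2 - x1                # naive separation: -1.8
dx -= L * np.round(dx / L)  # apply the minimum image convention: 0.2
print(dx)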

[4]:
def compute_distances(box, positions):
    """Compute pairwise particle distances, taking into account PBCs.

    Args:
        box (:class:`freud.box.Box`): The simulation box the particles live in.
        positions (:class:`np.ndarray`): The particle positions.
    """
    # First we shift all the particles so that the coordinates lie in
    # [0, L] rather than [-L/2, L/2].
    positions[:, 0] = np.mod(positions[:, 0]+box.Lx/2, box.Lx)
    positions[:, 1] = np.mod(positions[:, 1]+box.Ly/2, box.Ly)

    # To apply minimum image convention, we check if the distance is
    # greater than half the box length in either direction, and if it
    # is, we replace it with L-distance instead. We use broadcasting
    # to get all pairwise positions, then modify the pos2 array where
    # the distance is found to be too large for a specific pair.
    pos1, pos2 = np.broadcast_arrays(positions[np.newaxis, :, :], positions[:, np.newaxis, :])
    vectors = pos1 - pos2
    pos2[:, :, 0] = np.where(np.abs(vectors[:, :, 0]) > box.Lx/2,
                                box.Lx - np.abs(pos2[:, :, 0]),
                                pos2[:, :, 0])
    pos2[:, :, 1] = np.where(np.abs(vectors[:, :, 1]) > box.Ly/2,
                                box.Ly - np.abs(pos2[:, :, 1]),
                                pos2[:, :, 1])

    distances = np.linalg.norm(pos1 - pos2, axis=-1)
    return distances
[5]:
pairwise_distances = compute_distances(box, particles)
for i in range(pairwise_distances.shape[0]):
    js = np.where(pairwise_distances[i, :] < r_max)
    print("The particles within a distance 1 of particle {} are: {}".format(
        i, ', '.join([str(j) for j in js[0] if not j==i])))
The particles within a distance 1 of particle 0 are: 1, 4, 5
The particles within a distance 1 of particle 1 are: 0, 2, 3, 4, 5
The particles within a distance 1 of particle 2 are: 1
The particles within a distance 1 of particle 3 are: 1, 4, 5
The particles within a distance 1 of particle 4 are: 0, 1, 3, 5
The particles within a distance 1 of particle 5 are: 0, 1, 3, 4

For larger systems, however, such pairwise calculations would quickly become prohibitively expensive. The primary benefit of the LinkCell object is that it can dramatically reduce this cost.

[6]:
log_Ns = np.arange(5, 12)
lc_times = []
naive_times = []
for log_N in log_Ns:
    print("Running for log_N = {}".format(log_N))
    particles = np.random.rand(int(2**log_N), 3)*L-L/2
    particles[:, 2] = 0  # the box is 2D, so set the z coordinates to 0
    lc_times.append(timeit.timeit("lc.compute(box, particles)", number=10, globals=globals()))
    naive_times.append(timeit.timeit("compute_distances(box, particles)", number=10, globals=globals()))
Running for log_N = 5
Running for log_N = 6
Running for log_N = 7
Running for log_N = 8
Running for log_N = 9
Running for log_N = 10
Running for log_N = 11
[7]:
fig, ax = plt.subplots()
ax.plot(2**log_Ns, lc_times, label="LinkCell")
ax.plot(2**log_Ns, naive_times, label="Naive Calculation")
ax.legend(fontsize=14)
ax.tick_params(axis='both', which='both', labelsize=14)
ax.set_xlabel("Number of particles", fontsize=14)
ax.set_ylabel("Time to compute (s)", fontsize=14);
_images/examples_module_intros_Locality-LinkCell_11_0.png
Nearest Neighbors

One of the basic computations required for higher-level computations (such as the hexatic order parameter) is finding the nearest neighbors of a particle. This tutorial will show you how to compute the nearest neighbors and visualize that data.

The algorithm is straightforward:

for each particle i:
    for each particle j in neighbor_cells(i):
        r_ij = position[j] - position[i]
        r = sqrt(dot(r_ij, r_ij))
        r_array.append(r)
        n_array.append(j)
    # sort neighbors by distance and keep the k nearest
    sort(n_array, r_array)
    neighbor_array[i] = n_array[:k]
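As a rough, standalone sketch of the same idea (ignoring periodic boundaries and the cell list, both of which freud handles internally), the NumPy snippet below computes all pairwise distances for a handful of random points and keeps the \(k\) nearest neighbors of each one:

import numpy as np

np.random.seed(0)
positions = np.random.rand(10, 2)  # 10 random particles in 2D, no periodicity
k = 3

# All pairwise separation vectors and distances.
r_ij = positions[np.newaxis, :, :] - positions[:, np.newaxis, :]
r = np.linalg.norm(r_ij, axis=-1)

# For each particle, sort by distance and keep the k nearest,
# skipping the first entry (the particle itself at distance 0).
neighbor_array = np.argsort(r, axis=1)[:, 1:k+1]
print(neighbor_array)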

The data sets used in this example are a system of hard hexagons, simulated in the NVT thermodynamic ensemble in HOOMD-blue, for a dense fluid of hexagons at packing fraction \(\phi = 0.65\) and a solid at packing fraction \(\phi = 0.75\).

[1]:
from bokeh.io import output_notebook
output_notebook()
from bokeh.models import Legend
from bokeh.plotting import figure, output_file, show
from bokeh.layouts import gridplot
import numpy as np
import time
from freud import parallel, box, locality
parallel.setNumThreads(4)
import util

# Create hexagon vertices
verts = util.make_polygon(sides=6, radius=0.6204)

# Define colors for our system
c_list = ["#30A2DA", "#FC4F30", "#E5AE38", "#6D904F", "#9757DB",
          "#188487", "#FF7F00", "#9A2C66", "#626DDA", "#8B8B8B"]
c_dict = dict()
c_dict[6] = c_list[0]
c_dict[5] = c_list[1]
c_dict[4] = c_list[2]
c_dict[3] = c_list[7]
c_dict[7] = c_list[4]

def render_plot(p, fbox):
    # Display box
    corners = util.box_2d_to_points(fbox)
    p.patches(xs=[corners[:-1, 0]], ys=[corners[:-1, 1]],
              fill_color=(0, 0, 0, 0), line_color="black", line_width=2)
    p.legend.location = 'bottom_center'
    p.legend.orientation = 'horizontal'
    util.default_bokeh(p)
    show(p)
[2]:
# Load the data
data_path = "data/phi065"
box_data = np.load("{}/box_data.npy".format(data_path))
pos_data = np.load("{}/pos_data.npy".format(data_path))
quat_data = np.load("{}/quat_data.npy".format(data_path))
n_frames = pos_data.shape[0]
Viewing our system

Before proceeding, we should probably view our system first. freud does not make any assumptions about your data and is not specifically designed for any one visualization package. Here we use Bokeh to render our system. Bokeh is not appropriate for real-time interaction with your simulation data, nor is it appropriate for 3D data, but it is perfectly fine for rendering individual simulation frames, so we will use it here.

[3]:
# Grab data from last frame
l_box = box_data[-1].tolist()
l_pos = pos_data[-1]
l_quat = quat_data[-1]
l_ang = 2*np.arctan2(np.copy(l_quat[:, 3]), np.copy(l_quat[:, 0]))

# Create box
fbox = box.Box.from_box(l_box)
side_length = max(fbox.Lx, fbox.Ly)
l_max = side_length / 2.0
l_max *= 1.1

# Take local vertices and rotate, translate into system coordinates
patches = util.local_to_global(verts, l_pos[:, :2], l_ang)

# Plot
p = figure(title="System Visualization",
           x_range=(-l_max, l_max),
           y_range=(-l_max, l_max))
p.patches(xs=patches[:, :, 0].tolist(), ys=patches[:, :, 1].tolist(),
          fill_color=(42, 126, 187), line_color="black", line_width=1.5, legend="Hexagons")
render_plot(p, fbox)

By eye, we can see regions where the hexagons appear close-packed, as well as regions where there are vacancies. We will be using the Nearest Neighbor object to investigate this in our system.

The Nearest Neighbor object

This module will give the indices of the \(k\) particles which are nearest to another particle. freud provides two different modes by which to compute the nearest neighbors, selected by the strict_cut variable:

  • strict_cut=False (default): The value for rmax is expanded until every particle has \(k\) nearest neighbors

  • strict_cut=True: the rmax value is not expanded, so that any “vacancies” in the number of neighbors found are filled with UINTMAX

strict_cut=False

First we show how to use the strict_cut=False mode to find the neighbors of a specific particle.

[4]:
# Create freud nearest neighbor object
n_neigh = 6
nn = locality.NearestNeighbors(rmax=1.5, n_neigh=n_neigh, strict_cut=False)

# Compute nearest neighbors
nn.compute(fbox, l_pos, l_pos)
# Get the NeighborList
n_list = nn.nlist
# Get the neighbors for particle 0
pidx = 0
n_idxs = n_list.index_j[np.where(n_list.index_i == pidx)[0]]

# Get position, orientation for the central particle
center_pos = l_pos[np.newaxis, pidx]
center_ang = l_ang[np.newaxis, pidx]

# Get the positions, orientations for the neighbor particles
neigh_pos = np.zeros(shape=(n_neigh, 3), dtype=np.float32)
neigh_ang = np.zeros(shape=(n_neigh), dtype=np.float32)
neigh_pos[:] = l_pos[n_idxs]
neigh_ang[:] = l_ang[n_idxs]

# Create array of transformed positions
c_patches = util.local_to_global(verts, center_pos[:, 0:2], center_ang)
n_patches = util.local_to_global(verts, neigh_pos[:, 0:2], neigh_ang)

# Create array of colors
center_color = np.array([c_list[0] for _ in range(center_pos.shape[0])])
neigh_color = np.array([c_list[-1] for _ in range(neigh_pos.shape[0])])

# Plot
p = figure(title="Nearest Neighbors Visualization",
           x_range=(-l_max, l_max), y_range=(-l_max, l_max))
p.patches(xs=n_patches[:, :, 0].tolist(), ys=n_patches[:, :, 1].tolist(),
          fill_color=neigh_color.tolist(), line_color="black", legend="Neighbors")
p.patches(xs=c_patches[:, :, 0].tolist(), ys=c_patches[:, :, 1].tolist(),
          fill_color=center_color.tolist(), line_color="black", legend="Center")
render_plot(p, fbox)

Notice that nearest neighbors properly handles periodic boundary conditions.

We do the same thing below, but for a particle/neighbors not spanning the box.

[5]:
# Get the neighbors for particle 1000
pidx = 1000
segment_start = n_list.segments[pidx]
segment_end = segment_start + n_list.neighbor_counts[pidx]
n_idxs = n_list.index_j[segment_start:segment_end]

# Get position, orientation for the central particle
center_pos = l_pos[np.newaxis, pidx]
center_ang = l_ang[np.newaxis, pidx]

# Get positions, orientations for the neighbors and one non-neighbor
neigh_pos = l_pos[n_idxs]
neigh_ang = l_ang[n_idxs]
non_neigh_pos = neigh_pos[np.newaxis, -1]
non_neigh_ang = neigh_ang[np.newaxis, -1]

# Create array of transformed positions
c_patches = util.local_to_global(verts, center_pos[:, :2], center_ang)
n_patches = util.local_to_global(verts, neigh_pos[:, :2], neigh_ang)
non_n_patches = util.local_to_global(verts, non_neigh_pos[:, :2], non_neigh_ang)

# Create array of colors
center_color = np.array([c_list[0] for _ in range(center_pos.shape[0])])
neigh_color = np.array([c_list[-1] for _ in range(neigh_pos.shape[0])])

# Color the last particle differently
non_neigh_color = np.array([c_list[-2] for _ in range(non_neigh_pos.shape[0])])

# Plot
p = figure(title="Nearest Neighbors Visualization",
           x_range=(-l_max, l_max), y_range=(-l_max, l_max))
p.patches(xs=n_patches[:, :, 0].tolist(), ys=n_patches[:, :, 1].tolist(),
          fill_color=neigh_color.tolist(), line_color="black", legend="Neighbors")
p.patches(xs=non_n_patches[:, :, 0].tolist(), ys=non_n_patches[:, :, 1].tolist(),
          fill_color=non_neigh_color.tolist(), line_color="black", legend="Non-neighbor")
p.patches(xs=c_patches[:, :, 0].tolist(), ys=c_patches[:, :, 1].tolist(),
          fill_color=center_color.tolist(), line_color="black", legend="Center")
render_plot(p, fbox)

Notice that while freud found the 6 nearest neighbors, one of the particles isn’t really in a neighbor position (which we have colored purple). How do we go about finding particles with a deficit or surplus of neighbors?

strict_cut=True

Now for strict_cut=True. This mode allows you to find particles that have fewer than the specified number of neighbors. For this system, we’ll search for 8 neighbors, so that we can display particles with both a deficit and a surplus of neighbors.

[6]:
# Create freud nearest neighbors object
n_neigh = 8
rmax = 1.7
nn = locality.NearestNeighbors(rmax=rmax, n_neigh=n_neigh, strict_cut=True)

# Compute nearest neighbors
nn.compute(fbox, l_pos, l_pos)
# Get the neighborlist
n_list = nn.nlist
# Get the number of particles
num_particles = nn.n_ref

p = figure(title="Nearest Neighbors visualization",
           x_range=(-l_max, l_max), y_range=(-l_max, l_max))
for k in np.unique(n_list.neighbor_counts):
    # Find particles with k neighbors
    c_idxs = np.where(n_list.neighbor_counts == k)[0]
    center_pos = l_pos[c_idxs]
    center_ang = l_ang[c_idxs]
    c_patches = util.local_to_global(verts, center_pos[:, 0:2], center_ang)
    center_color = np.array([c_dict[k] for _ in range(center_pos.shape[0])])
    p.patches(xs=c_patches[:, :, 0].tolist(), ys=c_patches[:, :, 1].tolist(),
              fill_color=center_color.tolist(), line_color="black", legend="k={}".format(k))
render_plot(p, fbox)
Visualize each set of k values independently
[7]:
for k in np.unique(n_list.neighbor_counts):
    p = figure(title="Nearest Neighbors: k={}".format(k),
               x_range=(-l_max, l_max), y_range=(-l_max, l_max))
    # Find particles with k neighbors
    c_idxs = np.where(n_list.neighbor_counts == k)[0]
    center_pos = l_pos[c_idxs]
    center_ang = l_ang[c_idxs]
    c_patches = util.local_to_global(verts, center_pos[:, 0:2], center_ang)
    patches = util.local_to_global(verts, l_pos[:, 0:2], l_ang)
    center_color = np.array([c_dict[k] for _ in range(center_pos.shape[0])])
    p.patches(xs=patches[:, :, 0].tolist(), ys=patches[:, :, 1].tolist(),
              fill_color=(128, 128, 128, 0.1), line_color=(0, 0, 0, 0.1), legend="other")
    p.patches(xs=c_patches[:, :, 0].tolist(), ys=c_patches[:, :, 1].tolist(),
              fill_color=center_color.tolist(), line_color="black", legend="k={}".format(k))
    render_plot(p, fbox)
[8]:
for k in np.unique(n_list.neighbor_counts):
    p = figure(title="Nearest Neighbors: k={}".format(k),
               x_range=(-l_max, l_max), y_range=(-l_max, l_max))
    # Find particles with k neighbors
    c_idxs = np.copy(np.where(n_list.neighbor_counts == k)[0])
    center_pos = l_pos[c_idxs]
    center_ang = l_ang[c_idxs]

    neigh_pos = np.zeros(shape=(k*len(c_idxs), 3), dtype=np.float32)
    neigh_ang = np.zeros(shape=(k*len(c_idxs)), dtype=np.float32)
    for i, pidx in enumerate(c_idxs):
        # Create a list of positions, angles to draw
        segment_start = n_list.segments[pidx]
        segment_end = segment_start + n_list.neighbor_counts[pidx]
        n_idxs = n_list.index_j[segment_start:segment_end]
        for j, nidx in enumerate(n_idxs):
            neigh_pos[k*i+j] = l_pos[nidx]
            neigh_ang[k*i+j] = l_ang[nidx]
    c_patches = util.local_to_global(verts, center_pos[:, 0:2], center_ang)
    n_patches = util.local_to_global(verts, neigh_pos[:, 0:2], neigh_ang)
    patches = util.local_to_global(verts, l_pos[:, 0:2], l_ang)
    center_color = np.array([c_dict[k] for _ in range(center_pos.shape[0])])
    neigh_color = np.array([c_list[-1] for _ in range(neigh_pos.shape[0])])
    p.patches(xs=patches[:, :, 0].tolist(), ys=patches[:, :, 1].tolist(),
              fill_color=(128, 128, 128, 0.1), line_color=(0, 0, 0, 0.1), legend="other")
    p.patches(xs=n_patches[:, :, 0].tolist(), ys=n_patches[:, :, 1].tolist(),
              fill_color=neigh_color.tolist(), line_color="black", legend="neighbors")
    p.patches(xs=c_patches[:, :, 0].tolist(), ys=c_patches[:, :, 1].tolist(),
              fill_color=center_color.tolist(), line_color="black", legend="k={}".format(k))
    render_plot(p, fbox)
Compare against a solid/crystalline system

Visualize in the same way, but with a solid system. Notice that almost all the hexagons have 6 nearest neighbors.

[9]:
data_path = "data/phi075"
box_data = np.load("{}/box_data.npy".format(data_path))
pos_data = np.load("{}/pos_data.npy".format(data_path))
quat_data = np.load("{}/quat_data.npy".format(data_path))
n_frames = pos_data.shape[0]

# Grab data from last frame
l_box = box_data[-1].tolist()
l_pos = pos_data[-1]
l_quat = quat_data[-1]
l_ang = 2*np.arctan2(np.copy(l_quat[:, 3]), np.copy(l_quat[:, 0]))

# Create box
fbox = box.Box.from_box(l_box)
side_length = max(fbox.Lx, fbox.Ly)
l_max = side_length / 2.0
l_max *= 1.1

# Create freud nearest neighbor object
n_neigh = 8
rmax = 1.65
nn = locality.NearestNeighbors(rmax=rmax, n_neigh=n_neigh, strict_cut=True)

# Compute nearest neighbors
nn.compute(fbox, l_pos, l_pos)
# Get the neighborlist
n_list = nn.nlist

p = figure(title="Nearest Neighbors Visualization",
           x_range=(-l_max, l_max), y_range=(-l_max, l_max))
for k in np.unique(n_list.neighbor_counts):
    # find particles with k neighbors
    c_idxs = np.where(n_list.neighbor_counts == k)[0]
    center_pos = l_pos[c_idxs]
    center_ang = l_ang[c_idxs]
    c_patches = util.local_to_global(verts, center_pos[:, 0:2], center_ang)
    center_color = np.array([c_dict[k] for _ in range(center_pos.shape[0])])
    p.patches(xs=c_patches[:, :, 0].tolist(), ys=c_patches[:, :, 1].tolist(),
              fill_color=center_color.tolist(), line_color="black", legend="k={}".format(k))
render_plot(p, fbox)

Analysis Modules

These introductory examples showcase the functionality of specific modules in freud, showing how they can be used to perform specific types of analyses of simulations.

Cluster

The cluster module uses a set of coordinates and a cutoff distance to determine clustered points. The example below generates random points, and shows that they form clusters. This case is two-dimensional (with \(z=0\) for all particles) for simplicity, but the cluster module works for both 2D and 3D simulations.

[1]:
import numpy as np
import freud
import matplotlib.pyplot as plt

First, we generate random points to cluster.

[2]:
points = np.empty(shape=(0, 2))
for center_point in [(0, 0), (2, 2), (-1, 2), (2, 3)]:
    points = np.concatenate(
        (points, np.random.multivariate_normal(mean=center_point, cov=0.05*np.eye(2), size=(100,))))
fig, ax = plt.subplots(1, 1, figsize=(9, 6))
ax.scatter(points[:,0], points[:,1])
ax.set_title('Raw points before clustering', fontsize=20)
ax.tick_params(axis='both', which='both', labelsize=14, size=8)
plt.show()

# We must add a z=0 component to this array for freud
points = np.hstack((points, np.zeros((points.shape[0], 1))))
_images/examples_module_intros_Cluster-Cluster_3_0.png

Now we create a box and a cluster compute object.

[3]:
box = freud.box.Box.square(L=10)
cl = freud.cluster.Cluster(box=box, rcut=1.0)

Next, we use the computeClusters method to determine clusters and the cluster_idx property to return their identities. Note that we use freud’s method chaining here, where a compute method returns the compute object.

[4]:
cluster_idx = cl.computeClusters(points).cluster_idx
print(cluster_idx)
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]
[5]:
fig, ax = plt.subplots(1, 1, figsize=(9, 6))
for cluster_id in range(cl.num_clusters):
    cluster_point_indices = np.where(cluster_idx == cluster_id)[0]
    ax.scatter(points[cluster_point_indices, 0], points[cluster_point_indices, 1], label="Cluster {}".format(cluster_id))
    print("There are {} points in cluster {}.".format(len(cluster_point_indices), cluster_id))
ax.set_title('Clusters identified', fontsize=20)
ax.legend(loc='best', fontsize=14)
ax.tick_params(axis='both', which='both', labelsize=14, size=8)
plt.show()
There are 100 points in cluster 0.
There are 200 points in cluster 1.
There are 100 points in cluster 2.
_images/examples_module_intros_Cluster-Cluster_8_1.png

We may also compute the clusters’ centers of mass and gyration tensor using the ClusterProperties class.

[6]:
clp = freud.cluster.ClusterProperties(box=box)
clp.computeProperties(points, cl.cluster_idx)
[6]:
freud.cluster.ClusterProperties(box=freud.box.Box(Lx=10.0, Ly=10.0, Lz=0.0, xy=0.0, xz=0.0, yz=0.0, is2D=True))

Plotting these clusters with their centers of mass, with size proportional to the number of clustered points:

[7]:
plt.figure(figsize=(9, 6))
for cluster_id in range(cl.num_clusters):
    cluster_point_indices = np.where(cluster_idx == cluster_id)[0]
    plt.scatter(points[cluster_point_indices, 0], points[cluster_point_indices, 1],
                label="Cluster {}".format(cluster_id))
for cluster_id in range(cl.num_clusters):
    cluster_point_indices = np.where(cluster_idx == cluster_id)[0]
    plt.scatter(clp.cluster_COM[cluster_id, 0], clp.cluster_COM[cluster_id, 1],
                s=len(cluster_point_indices),
                label="Cluster {} COM".format(cluster_id))
plt.title('Center of mass for each cluster', fontsize=20)
plt.legend(loc='best', fontsize=14)
plt.gca().tick_params(axis='both', which='both', labelsize=14, size=8)
plt.show()
_images/examples_module_intros_Cluster-Cluster_12_0.png

The \(3\times3\) gyration tensors \(G\) can also be computed for each cluster. For this two-dimensional case, the \(z\) components of the gyration tensor are zero.
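For reference, the gyration tensor used here follows the standard number-averaged definition (our notation, stated for clarity), where the sum runs over the \(N\) points belonging to a cluster and \(\mathbf{r}^{COM}\) is that cluster’s center of mass:

\[G_{mn} = \frac{1}{N} \sum_{i=1}^{N} \left(r_{im} - r^{COM}_m\right)\left(r_{in} - r^{COM}_n\right)\]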

[8]:
for cluster_id in range(cl.num_clusters):
    G = clp.cluster_G[cluster_id]
    print("The gyration tensor of cluster {} is:\n{}".format(cluster_id, G))
The gyration tensor of cluster 0 is:
[[0.06236173 0.00625861 0.        ]
 [0.00625861 0.05044873 0.        ]
 [0.         0.         0.        ]]
The gyration tensor of cluster 1 is:
[[ 0.05807656 -0.0082754   0.        ]
 [-0.0082754   0.3028591   0.        ]
 [ 0.          0.          0.        ]]
The gyration tensor of cluster 2 is:
[[ 0.04002637 -0.0031836   0.        ]
 [-0.0031836   0.04837101  0.        ]
 [ 0.          0.          0.        ]]

This gyration tensor can be used to determine the principal axes of the cluster and the radius of gyration along each principal axis. Here, we plot the gyration tensor’s eigenvectors, scaled by the square root of the corresponding eigenvalues.

[9]:
plt.figure(figsize=(9, 6))
for cluster_id in range(cl.num_clusters):
    cluster_point_indices = np.where(cluster_idx == cluster_id)[0]
    plt.scatter(points[cluster_point_indices, 0], points[cluster_point_indices, 1], label="Cluster {}".format(cluster_id))
for cluster_id in range(cl.num_clusters):
    com = clp.cluster_COM[cluster_id]
    G = clp.cluster_G[cluster_id]
    evals, evecs = np.linalg.eig(G)
    radii = np.sqrt(evals)
    for evalue, evec in zip(evals[:2], evecs[:2, :2]):
        print("Cluster {} has radius of gyration {:.4f} along the axis of ({:.4f}, {:.4f}).".format(cluster_id, evalue, *evec))
    arrows = (radii * evecs)[:2, :2]
    for arrow in arrows:
        plt.arrow(com[0], com[1], arrow[0], arrow[1], width=0.08)
plt.title('Eigenvectors of the gyration tensor for each cluster', fontsize=20)
plt.legend(loc='best', fontsize=14)
plt.gca().tick_params(axis='both', which='both', labelsize=14, size=8)
Cluster 0 has radius of gyration 0.0650 along the axis of (0.9191, -0.3941).
Cluster 0 has radius of gyration 0.0478 along the axis of (0.3941, 0.9191).
Cluster 1 has radius of gyration 0.0578 along the axis of (-0.9994, 0.0337).
Cluster 1 has radius of gyration 0.3031 along the axis of (-0.0337, -0.9994).
Cluster 2 has radius of gyration 0.0390 along the axis of (-0.9474, 0.3202).
Cluster 2 has radius of gyration 0.0494 along the axis of (-0.3202, -0.9474).
_images/examples_module_intros_Cluster-Cluster_16_1.png
Density - ComplexCF
Orientational Ordering in 2D

The freud.density module is intended to compute a variety of quantities that relate spatial distributions of particles with other particles. This example shows how correlation functions can be used to measure orientational order in 2D.

[1]:
import numpy as np
import freud
import matplotlib.pyplot as plt
import matplotlib.cm
from matplotlib.colors import Normalize

This helper function will make plots of the data we generate in this example.

[2]:
def plot_data(title, points, angles, values, box, ccf, s=200):
    cmap = matplotlib.cm.viridis
    norm = Normalize(vmin=-np.pi/4, vmax=np.pi/4)
    plt.figure(figsize=(16, 6))
    plt.subplot(121)
    for point, angle, value in zip(points, angles, values):
        plt.scatter(point[0], point[1], marker=(4, 0, np.rad2deg(angle)+45),
                    edgecolor='k', c=[cmap(norm(angle))], s=s)
    plt.title(title)
    plt.gca().set_xlim([-box.Lx/2, box.Lx/2])
    plt.gca().set_ylim([-box.Ly/2, box.Ly/2])
    plt.gca().set_aspect('equal')
    sm = plt.cm.ScalarMappable(cmap='viridis', norm=norm)
    sm.set_array(angles)
    plt.colorbar(sm)
    plt.subplot(122)
    plt.title('Orientation Spatial Autocorrelation Function')
    plt.plot(ccf.R, np.real(ccf.RDF))
    plt.xlabel(r'$r$')
    plt.ylabel(r'$C(r)$')
    plt.show()

First, let’s generate a 2D structure with perfect orientational order and slight positional disorder (the particles are not perfectly on a grid, but they are perfectly aligned). The color of the particles corresponds to their angle of rotation, so all the particles will be the same color to begin with.

We create a freud.density.ComplexCF object to compute the correlation functions. Given a particle orientation \(\theta\), we compute its complex orientation value (the quantity we are correlating) as \(s = e^{4i\theta}\), to account for the fourfold symmetry of the particles. We will compute the correlation function \(C(r) = \left\langle s^*_i(0) \cdot s_j(r) \right\rangle\) by taking the average over all particle pairs \(i, j\) and binning the results into a histogram by the distance \(r\) between particles \(i\) and \(j\).

When we compute the correlations between particles, we must use the complex conjugate of the values array for one of the arguments. This way, if \(\theta_1\) is close to \(\theta_2\), then we get \(\left(e^{4i\theta_1}\right)^* \cdot \left(e^{4i\theta_2}\right) = e^{4i(\theta_2-\theta_1)} \approx e^0 = 1\).

This system has perfect spatial correlation of the particle orientations, so we see \(C(r) = 1\) for all values of \(r\).
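As a quick numerical check of the conjugate identity above (an illustrative snippet, not part of the original example), two nearly aligned squares give a product close to 1, while two squares misaligned by \(\pi/4\) give a product of \(-1\):

import numpy as np

theta1, theta2 = 0.10, 0.12
s1, s2 = np.exp(4j * theta1), np.exp(4j * theta2)
print(np.conj(s1) * s2)  # exp(0.08j), close to 1

theta3 = theta1 + np.pi/4  # maximally misaligned square
s3 = np.exp(4j * theta3)
print(np.conj(s1) * s3)  # exp(1j*pi) = -1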

[3]:
def make_particles(L, repeats):
    # Initialize a box and particle spacing
    box = freud.box.Box.square(L=L)
    dx = box.Lx/repeats

    # Generate points and add random positional noise
    points = np.array([[i, j, 0] for j in np.arange(-(box.Ly+dx)/2, (box.Ly+dx)/2, dx)
                       for i in np.arange(-(box.Lx+dx)/2, (box.Lx+dx)/2, dx)])
    cov = 1e-4*box.Lx*np.eye(3)
    cov[2, 2] = 0
    points += np.random.multivariate_normal(mean=np.zeros(3), cov=cov, size=len(points))
    return box, points

# Make a small system
box, points = make_particles(L=5, repeats=20)

# All the particles begin with their orientation at 0
angles = np.zeros(len(points))
values = np.array(np.exp(angles * 4j))

# Create the ComplexCF compute object and compute the correlation function
ccf = freud.density.ComplexCF(rmax=box.Lx/2.01, dr=box.Lx/50)
ccf.compute(box, points, np.conj(values), points, values)


plot_data('Particles before introducing Orientational Disorder',
          points, angles, values, box, ccf)
_images/examples_module_intros_Density-ComplexCF_5_0.png

Now we will generate random angles from \(-\frac{\pi}{4}\) to \(\frac{\pi}{4}\), which orients our squares randomly. The four-fold symmetry of the squares means that the space of unique angles is restricted to a range of \(\frac{\pi}{2}\). Again, we compute a complex value for each particle, \(s = e^{4i\theta}\).

Because we have purely random orientations, we expect no spatial correlations. As we see in the plot below, \(C(r) \approx 0\) for all \(r\).

[4]:
# Change the angles to values randomly drawn from a uniform distribution
angles = np.random.uniform(-np.pi/4, np.pi/4, size=len(points))
values = np.exp(angles * 4j)

# Recompute the correlation functions
ccf.compute(box, points, np.conj(values), points, values)

plot_data('Particles with Orientational Disorder',
          points, angles, values, box, ccf)
_images/examples_module_intros_Density-ComplexCF_7_0.png

The plot below shows what happens when we intentionally introduce a correlation length by adding a spatial pattern to the particle orientations. At short distances, the correlation is very high. As \(r\) increases, the oppositely-aligned part of the pattern some distance away causes the correlation to drop.

[5]:
# Use angles that vary spatially in a pattern
angles = np.pi/4 * np.cos(2*np.pi*points[:, 0]/box.Lx)
values = np.exp(angles * 4j)

# Recompute the correlation functions
ccf.compute(box, points, np.conj(values), points, values)

plot_data('Particles with Spatially Correlated Orientations',
          points, angles, values, box, ccf)
_images/examples_module_intros_Density-ComplexCF_9_0.png

In the larger system shown below, we see the spatial autocorrelation rise and fall with damped oscillations.

[6]:
# Make a large system
box, points = make_particles(L=10, repeats=40)

# Use angles that vary spatially in a pattern
angles = np.pi/4 * np.cos(8*np.pi*points[:, 0]/box.Lx)
values = np.exp(angles * 4j)

# Create a ComplexCF compute object
ccf = freud.density.ComplexCF(rmax=box.Lx/2.01, dr=box.Lx/50)
ccf.compute(box, points, np.conj(values), points, values)

plot_data('Larger System with Spatially Correlated Orientations',
          points, angles, values, box, ccf, s=80)
_images/examples_module_intros_Density-ComplexCF_11_0.png
FloatCF
Grain Size Determination

The freud.density module is intended to compute a variety of quantities that relate spatial distributions of particles with other particles. This example shows how correlation functions can be used to approximate the grain size within a system.

[1]:
import freud
import numpy as np
import matplotlib.pyplot as plt

To show the correlation function, we need a pretend data set. We start with a phase-separated Ising-like lattice and assign type values, either blue (-1) or red (+1), to generate “grains.”

To make the pretend data set, we create a large number of blue (-1) particles on a square grid. Then we place grain centers on a larger grid and draw grain radii from a normal distribution. We color the particles red (+1) if their distance from a grain center is less than the grain radius.

[2]:
# Set up the system
box = freud.box.Box.square(L=10)
dx = 0.15
num_grains = 4
dg = box.Lx/num_grains
points = np.array([[i, j, 0]
                   for j in np.arange(-box.Ly/2, box.Ly/2, dx)
                   for i in np.arange(-box.Lx/2, box.Lx/2, dx)])
values = np.array([-1 for p in points])
centroids = [[i*dg + 0.5*dg, j*dg + 0.5*dg, 0]
             for i in range(num_grains) for j in range(num_grains)]
grain_radii = np.abs(np.random.normal(size=num_grains**2, loc=0.25*dg, scale=0.05*dg))
for center, radius in zip(centroids, grain_radii):
    lc = freud.locality.LinkCell(box, radius).compute(box, points, [center])
    for i in lc.nlist.index_i:
        values[i] = 1

plt.figure(figsize=(8, 8))
plt.scatter(points[values > 0, 0],
            points[values > 0, 1],
            marker='o', color='red', s=25)
plt.scatter(points[values < 0, 0],
            points[values < 0, 1],
            marker='o', color='blue', s=25)
plt.show()
_images/examples_module_intros_Density-FloatCF_3_0.png

This system is phase-separated because the red particles are generally near one another, and so are the blue particles. However, there is some length scale at which the phase separation is no longer visible. Imagine looking at this image from a far distance away: the red and blue regions would be indistinguishable, and the picture would appear purple.

We can use the correlation lengths between the grain centers and the surrounding red and blue particles to find where the grains end. First, we need to locate the centers of the grains using freud.cluster.

[3]:
cl = freud.cluster.Cluster(box=box, rcut=2*dx)
cluster_idx = cl.computeClusters(points[values > 0, :]).cluster_idx

Now we’ll find the cluster centroids (the grain centers).

[4]:
clp = freud.cluster.ClusterProperties(box=box)
clp.computeProperties(points[values > 0, :], cl.cluster_idx)
[4]:
freud.cluster.ClusterProperties(box=freud.box.Box(Lx=10.0, Ly=10.0, Lz=0.0, xy=0.0, xz=0.0, yz=0.0, is2D=True))

Now we create a freud.density.FloatCF compute object to compute the correlation function.

[5]:
fcf = freud.density.FloatCF(rmax=0.5*box.Lx/num_grains, dr=dx)
fcf.reset()

Now we compute the product of type values. Knowing that two sites with the same type will have a product of 1, we can estimate the radius of a grain as the smallest value of \(r\) such that \(\langle p \ast q \rangle (r) < 0\), where the set \(p\) represents our red and blue particles and \(q\) represents the grain centers.

[6]:
plt.figure(figsize=(10, 5))
weights = []
sizes = []
measured_grain_radii = []
for cluster_id in range(cl.num_clusters):
    fcf.compute(box=box,
                ref_points=points,  # all points in the system
                ref_values=values,   # all types in the system
                points=[clp.cluster_COM[cluster_id]],
                values=[1])
    # get the center of the histogram bins
    r = fcf.R
    # get the value of the histogram bins
    y = fcf.RDF
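    # Estimate the grain radius by linearly interpolating between the last
    # positive bin and the first negative bin to find where the correlation
    # crosses zero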
    grainsize = y[y > 0][-1] * (r[y < 0][0]-r[y > 0][-1]) / \
        (y[y > 0][-1]-y[y < 0][0]) + r[y > 0][-1]
    measured_grain_radii.append(grainsize)

    plt.plot(r, y)
    plt.scatter(x=grainsize, y=0, label='Cluster {}, r={}'.format(
        cluster_id, round(grainsize, 3)))

plt.title("Grain Center Spatial Correlation", fontsize=16)
plt.xlabel(r"$r$", fontsize=14)
plt.ylabel(r"$\langle p \ast q \rangle (r)$", fontsize=14)
plt.legend()
plt.tick_params(which="both", axis="both", labelsize=12)
plt.show()
_images/examples_module_intros_Density-FloatCF_11_0.png

This table shows the grain radii we expected compared to the ones we computed.

[7]:
print('Actual Radius\tMeasured Radius')
for actual, measured in zip(sorted(grain_radii), sorted(measured_grain_radii)):
    print('{:.4f}\t\t{:.4f}'.format(actual, measured))
Actual Radius   Measured Radius
0.4628          0.4821
0.5246          0.5573
0.5345          0.5684
0.5457          0.5684
0.6608          0.6471
0.6705          0.6471
0.6825          0.6831
0.6879          0.6831
0.6939          0.6831
0.6953          0.7226
0.7101          0.7330
0.7298          0.7383
0.7510          0.7525
0.7731          0.7640
0.7952          0.7712
0.8106          0.8115
GaussianDensity

The freud.density module is intended to compute a variety of quantities that relate spatial distributions of particles with other particles. In this notebook, we demonstrate freud’s Gaussian density calculation, which interpolates particle configurations onto a regular grid so that they can be processed by other algorithms that require regularity, such as a Fast Fourier Transform.

[1]:
import numpy as np
from scipy import stats
import freud
import matplotlib.pyplot as plt

To illustrate the basic concept, consider a toy example: a simple set of point particles with unit mass on a line. For analytical purposes, the standard way to represent the density of such a system would be with Dirac delta functions.

[2]:
n_p = 10000
np.random.seed(129)
x = np.linspace(0, 1, n_p)
y = np.zeros(n_p)
points = np.random.rand(10)
y[(points*n_p).astype('int')] = 1
plt.plot(x, y);
plt.show()
_images/examples_module_intros_Density-GaussianDensity_3_0.png

However, delta functions can be cumbersome to work with, so we might instead want to smooth out these particles. One option is to instead represent particles as Gaussians centered at the location of the points. In that case, the total particle density at any point in the interval \([0, 1]\) represented above would be based on the sum of the densities of those Gaussians at those points.

[3]:
# Note that we use a Gaussian with a small standard deviation
# to emphasize the differences on this small scale
dists = [stats.norm(loc=i, scale=0.1) for i in points]
y_gaussian = 0
for dist in dists:
    y_gaussian += dist.pdf(x)
plt.plot(x, y_gaussian);
plt.show()
_images/examples_module_intros_Density-GaussianDensity_5_0.png

The goal of the GaussianDensity class is to perform the same interpolation for points on a 2D or 3D grid, accounting for Box periodicity.

[ ]:
N = 1000  # Number of points
L = 10  # Box length
box = freud.box.Box.square(L)
points = box.wrap(np.random.rand(N, 3)*L)
points[:, 2] = 0
gd = freud.density.GaussianDensity(L*L, L/3, 1)
gd.compute(box, points)

fig, axes = plt.subplots(1, 2, figsize=(14, 6))
axes[0].scatter(points[:, 0], points[:, 1])
im = axes[1].imshow(gd.gaussian_density, origin='lower',
                    extent=axes[0].get_xlim() + axes[0].get_ylim())
plt.colorbar(im);
plt.show()
_images/examples_module_intros_Density-GaussianDensity_7_0.png

The effects are much more striking if we explicitly construct our points to be centered at certain regions.

[ ]:
N = 1000  # Number of points
L = 10  # Box length
box = freud.box.Box.square(L)
centers = np.array([[L/4, L/4, 0],
                    [-L/4, L/4, 0],
                    [L/4, -L/4, 0],
                    [-L/4, -L/4, 0]])

points = []
for center in centers:
    points.append(np.random.multivariate_normal(center, cov=np.eye(3, 3), size=(int(N/4),)))
points = box.wrap(np.concatenate(points))
points[:, 2] = 0

gd = freud.density.GaussianDensity(L*L, L/3, 1)
gd.compute(box, points)

fig, axes = plt.subplots(1, 2, figsize=(14, 6))
axes[0].scatter(points[:, 0], points[:, 1])
im = axes[1].imshow(gd.gaussian_density, origin='lower',
                    extent=axes[0].get_xlim() + axes[0].get_ylim())
plt.colorbar(im);
plt.show()
LocalDensity

The freud.density module is intended to compute a variety of quantities that relate spatial distributions of particles with other particles. In this notebook, we demonstrate freud’s local density calculation, which can be used to characterize the particle distributions in some systems. Here, we consider a toy example of calculating the particle density in the vicinity of a set of other points. This can be visualized as, for example, billiard balls on a table with certain regions of the table being stickier than others. In practice, this method could be used for analyzing, e.g., binary systems to determine how densely one species packs close to the surface of the other.

[1]:
import numpy as np
import freud
import util
import matplotlib.pyplot as plt
from matplotlib import patches
[2]:
# Define some helper plotting functions.
def add_patches(ax, points, radius=1, fill=False, color="#1f77b4", ls="solid", lw=None):
    """Add set of points as patches with radius to the provided axis"""
    for pt in points:
        p = patches.Circle(pt, fill=fill, linestyle=ls, radius=radius,
                           facecolor=color, edgecolor=color, lw=lw)
        ax.add_patch(p)

def plot_lattice(box, points, radius=1, ls="solid", lw=None):
    """Helper function for plotting points on a lattice."""
    fig, ax = plt.subplots(1, 1, figsize=(9, 9))
    box_points = util.box_2d_to_points(box)
    ax.plot(box_points[:, 0], box_points[:, 1], color='k')

    add_patches(ax, points, radius, ls=ls, lw=lw)
    return fig, ax

Let us consider a set of regions on a square lattice.

[3]:
area = 5
radius = np.sqrt(area/np.pi)
ref_area = area*100
ref_radius = np.sqrt(ref_area/np.pi)
num = 6
scale = num*4
box, ref_points = util.make_square(num, num, scale=scale)
ref_points[..., [0, 1]] += scale
fig, ax = plot_lattice(box, ref_points, ref_radius, ls="dashed", lw=2.5)
plt.tick_params(axis="both", which="both", labelsize=14)
plt.show()
_images/examples_module_intros_Density-LocalDensity_4_0.png

Now let’s add a set of points to this box. Points are added by drawing from a normal distribution centered at each of the regions above. For demonstration, we will assume that each region has some relative “attractiveness,” which is represented by the covariance in the normal distributions used to draw points. Specifically, as we go up and to the right, the covariance increases in proportion to the distance from the lower left corner of the box.

[4]:
points = []
distances = np.linalg.norm(ref_points + np.array(box.L)/2, axis=-1)
cov_basis = 20*distances/np.min(distances)
for i, p in enumerate(ref_points):
    cov = cov_basis[i]*np.eye(3)
    cov[2, 2] = 0  # Nothing in z
    points.append(
        np.random.multivariate_normal(p, cov, size=(50,)))
points = box.wrap(np.concatenate(points))
[5]:
fig, ax = plot_lattice(box, ref_points, ref_radius, ls="dashed", lw=2.5)
plt.tick_params(axis="both", which="both", labelsize=14)
add_patches(ax, points, radius, True, 'k', lw=None)
plt.show()
_images/examples_module_intros_Density-LocalDensity_7_0.png

We see that the density of points around each region decreases as we move up and to the right. In order to compute the actual densities, we can leverage the LocalDensity class. The class allows you to specify a set of reference points around which the number of other points is computed. These other points can, but need not, be distinct from the reference points. In our case, we want to use the regions as our reference points and the small circles as our data points.

When we construct the LocalDensity class, there are three arguments. The first is the radius from the reference points within which particles should be included in the reference point’s counter. The second and third are the volume and the circumsphere diameter of the data points, not the reference points. This distinction is critical for getting appropriate density values, since these values are used to check the cutoffs and calculate the density.

[6]:
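# The three constructor arguments are the neighbor search radius around each
# reference point, followed by the volume (area in 2D) and circumsphere
# diameter of the data points, as described above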
density = freud.density.LocalDensity(ref_radius, area, radius)
num_neighbors = density.compute(box, ref_points, points).num_neighbors
densities = density.compute(box, ref_points, points).density
[7]:
fig, axes = plt.subplots(1, 2, figsize=(14, 6))

for i, data in enumerate([num_neighbors, densities]):
    poly = np.poly1d(np.polyfit(cov_basis, data, 1))
    axes[i].tick_params(axis="both", which="both", labelsize=14)
    axes[i].scatter(cov_basis, data)
    x = np.linspace(*axes[i].get_xlim(), 30)
    axes[i].plot(x, poly(x), label="Best fit");
    axes[i].set_xlabel("Covariance", fontsize=16)

axes[0].set_ylabel("Number of neighbors", fontsize=16);
axes[1].set_ylabel("Density", fontsize=16);
plt.show()
_images/examples_module_intros_Density-LocalDensity_10_0.png

As expected, we see that increasing the variance of the point positions around a particular reference point decreases the density measured at that point. The trend is noisy since we are randomly sampling possible positions, but the general behavior is clear.

RDF: Accumulating g(r) for a Fluid

The freud.density module is intended to compute a variety of quantities that relate spatial distributions of particles with other particles. This example demonstrates the calculation of the radial distribution function \(g(r)\) for a fluid, averaged over multiple frames.

[1]:
import numpy as np
import freud
from util import box_2d_to_points
import matplotlib.pyplot as plt

data_path = "data/phi065"
box_data = np.load("{}/box_data.npy".format(data_path))
pos_data = np.load("{}/pos_data.npy".format(data_path))

def plot_rdf(box_arr, points_arr, prop, rmax=10, dr=0.1, label=None, ax=None):
    """Helper function for plotting RDFs."""
    if ax is None:
        fig, ax = plt.subplots(1, 1, figsize=(12, 8))
        ax.set_title(prop, fontsize=16)
    rdf = freud.density.RDF(rmax, dr)
    for box, points in zip(box_arr, points_arr):
        rdf.accumulate(box, points)
    if label is not None:
        ax.plot(rdf.R, getattr(rdf, prop), label=label)
        ax.legend()
    else:
        ax.plot(rdf.R, getattr(rdf, prop))
    return ax

Here, we show the difference between the RDF of one frame and an accumulated (averaged) RDF from several frames. Including more frames makes the plot smoother.

[2]:
# Compute the RDF for the last frame
box_arr = [box_data[-1].tolist()]
pos_arr = [pos_data[-1]]
ax = plot_rdf(box_arr, pos_arr, 'RDF', dr=0.1, label='One frame')

# Compute the RDF for the last 20 frames
box_arr = [box.tolist() for box in box_data[-20:]]
pos_arr = pos_data[-20:]
ax = plot_rdf(box_arr, pos_arr, 'RDF', dr=0.1, label='Last 20 frames', ax=ax)

plt.show()
_images/examples_module_intros_Density-RDF-AccumulateFluid_3_0.png

The difference between accumulate (which should be called on a series of frames) and compute (meant for a single frame) is more striking for smaller bin sizes, which are statistically noisier.
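
Schematically, the two usage patterns look like this (a sketch; the plot_rdf helper above wraps the averaging pattern by constructing a fresh RDF object on each call):

# Single frame
rdf = freud.density.RDF(10, 0.01)
rdf.compute(box_arr[-1], pos_arr[-1])

# Average over many frames
rdf = freud.density.RDF(10, 0.01)
for b, p in zip(box_arr, pos_arr):
    rdf.accumulate(b, p)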

[3]:
# Compute the RDF for the last frame
box_arr = [box_data[-1].tolist()]
pos_arr = [pos_data[-1]]
ax = plot_rdf(box_arr, pos_arr, 'RDF', dr=0.01, label='One frame')

# Compute the RDF for the last 20 frames
box_arr = [box.tolist() for box in box_data[-20:]]
pos_arr = pos_data[-20:]
ax = plot_rdf(box_arr, pos_arr, 'RDF', dr=0.01, label='Last 20 frames', ax=ax)

plt.show()
_images/examples_module_intros_Density-RDF-AccumulateFluid_5_0.png
RDF: Choosing Bin Widths

The freud.density module is intended to compute a variety of quantities that relate spatial distributions of particles with other particles. This example demonstrates the calculation of the radial distribution function \(g(r)\) using different bin sizes.

[1]:
import numpy as np
import freud
import util
import matplotlib.pyplot as plt
[2]:
# Define some helper plotting functions.
def plot_lattice(box, points, colors=None):
    """Helper function for plotting points on a lattice."""
    fig, ax = plt.subplots(1, 1, figsize=(9, 6))
    box_points = util.box_2d_to_points(box)
    ax.plot(box_points[:, 0], box_points[:, 1], color='k')

    if colors is not None:
        p = ax.scatter(points[:, 0], points[:, 1], c=colors, cmap='plasma')
        plt.colorbar(p)
    else:
        ax.scatter(points[:, 0], points[:, 1])
    return fig, ax

def plot_rdf(box, points, prop, rmax=3, drs=[0.15, 0.04, 0.001]):
    """Helper function for plotting RDFs."""
    fig, axes = plt.subplots(1, len(drs), figsize=(16, 3))
    for i, dr in enumerate(drs):
        rdf = freud.density.RDF(rmax, dr)
        rdf.compute(box, points)
        axes[i].plot(rdf.R, getattr(rdf, prop))
        axes[i].set_title("Bin width: {:.3f}".format(dr), fontsize=16)
    return fig, axes

To start, we construct and visualize a set of points sitting on a simple square lattice.

[3]:
box, points = util.make_square(5, 5)
fig, ax = plot_lattice(box, points)
plt.show()
_images/examples_module_intros_Density-RDF-BinWidth_4_0.png

If we try to compute the RDF directly from this, we will get something rather uninteresting since we have a perfect crystal. Indeed, we will observe that as we bin more and more finely, we approach the true behavior of the RDF for perfect crystals, which is a simple delta function.

[4]:
fig, ax = plot_rdf(box, points, 'RDF')
plt.show()
_images/examples_module_intros_Density-RDF-BinWidth_6_0.png

In these RDFs, we see two sharply defined peaks, with the first corresponding to the nearest neighbors on the lattice (which are all at a distance 2 from each other), and the second, smaller peak caused by the particles along the diagonal (which sit at a distance \(\sqrt{2^2+2^2} \approx 2.83\)).
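
As a quick check (not part of the original notebook), we can confirm these peak positions by looking at which bins of the RDF are nonzero for the perfect lattice:

rdf = freud.density.RDF(3, 0.04)
rdf.compute(box, points)
print(rdf.R[rdf.RDF > 0])  # expect bin centers near 2.0 and 2.83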

However, in more realistic systems, we expect that the lattice will not be perfectly formed. In this case, the RDF will exhibit more features. To demonstrate this fact, we reconstruct the square lattice of points from above, but we now introduce some noise into the system.

[5]:
box, points = util.make_square(10, 10, noise=0.15)
fig, ax = plot_lattice(box, box.wrap(points), np.linalg.norm(points-np.round(points), axis=1))
ax.set_title("Colored by distance from lattice sites", fontsize=16);
plt.show()
_images/examples_module_intros_Density-RDF-BinWidth_9_0.png
[6]:
fig, ax = plot_rdf(box, points, 'RDF')
plt.show()
_images/examples_module_intros_Density-RDF-BinWidth_10_0.png

In this RDF, we see the same rough features as we saw with the perfect lattice. However, the signal is much noisier, and in fact we see that increasing the number of bins essentially leads to overfitting of the data. As a result, we have to be careful with how we choose to bin our data when constructing the RDF object.

An alternative route for avoiding this problem is to use the cumulative RDF instead. The relationship between the cumulative RDF and the RDF is akin to that between a cumulative distribution function and a probability density function, providing a measure of the total density of particles experienced up to some distance rather than the value at that distance. Just as a CDF can help avoid certain mistakes common to plotting a PDF, plotting the cumulative RDF may be helpful in some cases. Here, we see that decreasing the bin size slightly alters the features of the plot, but only in a very minor way (i.e., decreasing the smoothness of the line due to small jitters).

[7]:
fig, ax = plot_rdf(box, points, 'n_r')
plt.show()
_images/examples_module_intros_Density-RDF-BinWidth_13_0.png
AngularSeparation

The freud.environment module analyzes the local environments of particles. The freud.environment.AngularSeparation class enables direct measurement of the relative orientations of particles.

[1]:
import freud
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['axes.titlepad'] = 20
from mpl_toolkits.mplot3d import Axes3D
import util

In order to work with orientations in freud, we need to do some math with quaternions. If you are unfamiliar with quaternions, you can read more about their definition and how they can be used to represent rotations. For the purpose of this tutorial, just consider them as 4D vectors, and know that the set of normalized (i.e. unit norm) 4D vectors can be used to represent rotations in 3D. In fact, unit quaternions map onto 3x3 rotation matrices two-to-one (\(q\) and \(-q\) represent the same rotation). Quaternions are more computationally convenient, however, because they only require storing 4 numbers rather than 9, and they can be much more easily chained together. For our purposes, you can largely ignore the contents of the next cell, other than to note that this is how we perform rotations of vectors using quaternions instead of matrices.

[2]:
# These functions are adapted from the rowan quaternion library.
# See rowan.readthedocs.io for more information.
def quat_multiply(qi, qj):
    """Multiply two sets of quaternions."""
    output = np.empty(np.broadcast(qi, qj).shape)

    output[..., 0] = qi[..., 0] * qj[..., 0] - \
        np.sum(qi[..., 1:] * qj[..., 1:], axis=-1)
    output[..., 1:] = (qi[..., 0, np.newaxis] * qj[..., 1:] +
                       qj[..., 0, np.newaxis] * qi[..., 1:] +
                       np.cross(qi[..., 1:], qj[..., 1:]))
    return output

def quat_rotate(q, v):
    """Rotate a vector by a quaternion."""
    v = np.array([0, *v])

    q_conj = q.copy()
    q_conj[..., 1:] *= -1

    return quat_multiply(q, quat_multiply(v, q_conj))[..., 1:]

def quat_to_angle(q):
    """Get rotation angles of quaternions."""
    norms = np.linalg.norm(q[..., 1:], axis=-1)
    return 2.0 * np.arctan2(norms, q[..., 0])
Neighbor Angles

One usage of the AngularSeparation class is to compute angles between neighboring particles. To show how this works, we generate a simple configuration of particles with random orientations associated with each point.

[3]:
box, positions = util.make_sc(5, 5, 5)
v = 0.05

# Quaternions can be simply sampled as 4-vectors.
# Note that these samples are not uniformly distributed rotations,
# but that is not important for our current applications.
np.random.seed(0)
ref_orientations = np.random.multivariate_normal(mean=[1, 0, 0, 0], cov=v*np.eye(4), size=positions.shape[0])
orientations = np.random.multivariate_normal(mean=[1, 0, 0, 0], cov=v*np.eye(4), size=positions.shape[0])

# However, they must be normalized: only unit quaternions represent rotations.
ref_orientations /= np.linalg.norm(ref_orientations, axis=1)[:, np.newaxis]
orientations /= np.linalg.norm(orientations, axis=1)[:, np.newaxis]
[4]:
# To show orientations, we use arrows rotated by the quaternions.
ref_arrowheads = quat_rotate(ref_orientations, np.array([1, 0, 0]))
arrowheads = quat_rotate(orientations, np.array([1, 0, 0]))

fig = plt.figure(figsize=(12, 6))
ref_ax = fig.add_subplot(121, projection='3d')
ax = fig.add_subplot(122, projection='3d')
ref_ax.quiver3D(positions[:, 0], positions[:, 1], positions[:, 2],
                ref_arrowheads[:, 0], ref_arrowheads[:, 1], ref_arrowheads[:, 2])
ax.quiver3D(positions[:, 0], positions[:, 1], positions[:, 2],
            arrowheads[:, 0], arrowheads[:, 1], arrowheads[:, 2])
ref_ax.set_title("Reference orientations", fontsize=16);
ax.set_title("Orientations", fontsize=16);
plt.show()
_images/examples_module_intros_Environment-AngularSeparation_6_0.png

We can now use the AngularSeparation class to compare the orientations in these two systems.

[5]:
num_neighbors = 12
r_max = 1.8

# For simplicity, we'll assume that our "particles" are completely
# asymmetric, i.e. there are no rotations that map the particle
# back onto itself. If we had a regular polyhedron, then we would
# want to specify all the quaternions that rotate that polyhedron
# onto itself.
equiv_ors = np.array([[1, 0, 0, 0]])
ang_sep = freud.environment.AngularSeparation(r_max, num_neighbors)
ang_sep.computeNeighbor(box, ref_orientations, orientations,
                        positions, positions, equiv_ors)

# Convert angles from radians to degrees
neighbor_angles = np.rad2deg(ang_sep.neighbor_angles)
neighbor_angles
[5]:
array([100.32801 ,  90.07115 ,  60.706676, ...,  42.66844 ,  61.745235,
        52.767185], dtype=float32)
Global Angles

Alternatively, the AngularSeparation class can also be used to compute the orientation of all points in the system relative to some global set of orientations. In this case, we simply provide a set of global quaternions that we want to consider. For simplicity, let’s consider \(180^\circ\) rotations about each of the coordinate axes, which have very simple quaternion representations.

[6]:
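# (1, 0, 0, 0) is the identity; (0, 1, 0, 0), (0, 0, 1, 0), and (0, 0, 0, 1)
# are 180-degree rotations about the x, y, and z axes, respectively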
global_orientations = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
ang_sep.computeGlobal(ref_orientations, global_orientations, equiv_ors)
global_angles = np.rad2deg(ang_sep.global_angles)

As a simple check, we can ensure that for the identity quaternion \((1, 0, 0, 0)\), which performs a \(0^\circ\) rotation, the angles between the reference orientations and that quaternion are equal to the original angles of rotation of those quaternions (i.e. how much those orientations were already rotated relative to the identity).

[7]:
np.allclose(global_angles[0, :], np.rad2deg(quat_to_angle(ref_orientations)))
[7]:
True
BondOrder
Computing Bond Order Diagrams

The freud.environment module analyzes the local environments of particles. In this example, the freud.environment.BondOrder class is used to plot the bond order diagram (BOD) of a system of particles.

[1]:
import numpy as np
import freud
import matplotlib.pyplot as plt
import matplotlib
from mpl_toolkits.mplot3d import Axes3D
import util
Setup

Our sample data will be taken from a face-centered cubic (FCC) structure. The array of points is rather large so that the plots are smooth. Smaller systems may need to use accumulate to gather data from multiple frames in order to smooth the resulting array’s statistics.

[2]:
box, points = util.make_fcc(nx=40, ny=40, nz=40, noise=0.1)
orientations = np.array([[1, 0, 0, 0]]*len(points))

Now we create a BondOrder compute object and create some arrays useful for plotting.

[3]:
rmax = 3 # This is intentionally large
n_bins_theta = 100
n_bins_phi = 100
k = 0 # This parameter is ignored
n = 12 # Chosen for FCC structure
bod = freud.environment.BondOrder(rmax=rmax, k=k, n=n, n_bins_t=n_bins_theta, n_bins_p=n_bins_phi)

phi = np.linspace(0, np.pi, n_bins_phi)
theta = np.linspace(0, 2*np.pi, n_bins_theta)
phi, theta = np.meshgrid(phi, theta)
x = np.sin(phi) * np.cos(theta)
y = np.sin(phi) * np.sin(theta)
z = np.cos(phi)
Computing the Bond Order Diagram

Next, we use the compute method and the bond_order property to return the array. Note that we use freud’s method chaining here, where a compute method returns the compute object.

[4]:
bod_array = bod.compute(box=box, ref_points=points, ref_orientations=orientations,
                        points=points, orientations=orientations).bond_order
bod_array = np.clip(bod_array, 0, np.percentile(bod_array, 99)) # This cleans up bad bins for plotting
plt.matshow(bod_array)
plt.show()
_images/examples_module_intros_Environment-BondOrder_7_0.png
Plotting on a sphere

This code shows the bond order diagram on a sphere as the sphere is rotated. The code takes a few seconds to run, so be patient.

[5]:
fig = plt.figure(figsize=(12, 8))
for plot_num in range(6):
    ax = fig.add_subplot(231 + plot_num, projection='3d')
    ax.plot_surface(x, y, z, rstride=1, cstride=1, shade=False,
                    facecolors=matplotlib.cm.viridis(bod_array.T / np.max(bod_array)))
    ax.set_xlim(-1, 1)
    ax.set_ylim(-1, 1)
    ax.set_zlim(-1, 1)
    ax.set_axis_off()
    # View angles in degrees
    view_angle = 0, plot_num*15
    ax.view_init(*view_angle)
plt.show()
_images/examples_module_intros_Environment-BondOrder_9_0.png
Using Neighbor Lists

We can also construct neighbor lists and use those to determine bonds instead of the rmax and n values in the BondOrder constructor. For example, we can filter for a range of bond lengths. Below, we only consider neighbors between \(r_{min} = 2.5\) and \(r_{max} = 3\) and plot the resulting bond order diagram.

[6]:
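# Build a neighbor list from a cell list, then keep only bonds with
# distances between 2.5 and 3 before recomputing the bond order diagram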
lc = freud.locality.LinkCell(box=box, cell_width=3)
nlist = lc.compute(box, points, points).nlist
nlist.filter_r(box, points, points, rmax=3, rmin=2.5)
bod_array = bod.compute(box=box, ref_points=points, ref_orientations=orientations,
                        points=points, orientations=orientations, nlist=nlist).bond_order
bod_array = np.clip(bod_array, 0, np.percentile(bod_array, 99)) # This cleans up bad bins for plotting
plt.matshow(bod_array)
plt.show()
_images/examples_module_intros_Environment-BondOrder_11_0.png
LocalDescriptors: Steinhardt Order Parameters

The freud.environment module analyzes the local environments of particles. The freud.environment.LocalDescriptors class is a useful tool for identifying crystal structures in a rotationally invariant manner using local particle environments. The primary purpose of this class is to compute spherical harmonics between neighboring particles in a way that orients particles correctly relative to their local environment, ensuring that global orientational shifts do not change the output.

[1]:
import freud
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import util
Computing Spherical Harmonics

To demonstrate the basic application of the class, let’s compute the spherical harmonics between neighboring particles. For simplicity, we consider points on a simple cubic lattice.

[2]:
box, points = util.make_sc(5, 5, 5)
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points[:, 0], points[:, 1], points[:, 2])
ax.set_title("Simple cubic crystal", fontsize=16);
plt.show()
_images/examples_module_intros_Environment-LocalDescriptors-Steinhardt_4_0.png

Now, let’s use the class to compute an array of spherical harmonics for the system. The harmonics are computed for each bond, where a bond is defined by a pair of particles that are determined to lie within each other’s nearest-neighbor shells based on a standard neighbor list search. The number of bonds and spherical harmonics to calculate is configurable.

[3]:
num_neighbors = 6
l_max = 12
r_max = 2

# In order to be able to access information on which particles are bonded
# to which ones, we precompute the neighborlist
nn = freud.locality.NearestNeighbors(r_max, num_neighbors)
nn.compute(box, points)
nl = nn.nlist
ld = freud.environment.LocalDescriptors(num_neighbors, l_max, r_max)
ld.compute(box, num_neighbors, points, mode='global', nlist=nl);
Accessing the Data

The resulting spherical harmonic array has one row per bond (i.e., the number of particles times the number of neighbors per particle) and one column per spherical harmonic. We can now extract the spherical harmonics corresponding to a particular \((l, m)\) pair using the ordering used by the LocalDescriptors class: increasing values of \(l\), and for each \(l\), the nonnegative \(m\) values followed by the negative values.

[4]:
sph_raw = np.mean(ld.sph, axis=0)
count = 0
# Use a wide second axis so that negative m values (stored with negative
# indexing) do not collide with the nonnegative ones for large l
sph = np.zeros((l_max+1, 2*l_max+1), dtype=np.complex128)
for l in range(l_max+1):
    for m in range(l+1):
        sph[l, m] = sph_raw[count]
        count += 1
    for m in range(-l, 0):
        sph[l, m] = sph_raw[count]
        count += 1
Using Spherical Harmonics to Compute Steinhardt Order Parameters

The raw per-bond spherical harmonics are not typically useful quantities on their own. However, they can be used to perform sophisticated crystal structure analyses with different methods; for example, the pythia library uses machine learning to find patterns in the spherical harmonics computed by this class. In this notebook, we’ll use the quantities for a more classical application: the computation of Steinhardt order parameters. The order parameters \(Q_l\) provide a rotationally invariant measure of the system that can, for some structures, provide a unique identifying fingerprint. They are particularly useful for cubic structures such as those with underlying simple cubic, BCC, or FCC lattices. The freud library provides additional classes to calculate these order parameters directly and efficiently, but computing them by hand is a reasonable demonstration here.

For more information on Steinhardt order parameters, see the original paper or the freud.order.LocalQl documentation.
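
Concretely, the get_Ql helper below implements the standard Steinhardt definition: the spherical harmonics are averaged over the \(N_b\) neighbors of particle \(i\), and a rotationally invariant sum over \(m\) is then taken,

\[\bar{Q}_{lm}(i) = \frac{1}{N_b} \sum_{j=1}^{N_b} Y_{lm}\left(\theta_{ij}, \phi_{ij}\right), \qquad Q_l(i) = \sqrt{\frac{4\pi}{2l+1} \sum_{m=-l}^{l} \left|\bar{Q}_{lm}(i)\right|^2}\]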

[5]:
def get_Ql(p, descriptors, nlist):
    """Given a set of points and a LocalDescriptors object (and the underlying neighborlist,
    compute the per-particle Steinhardt order parameter for all :math:`l` values up to the
    maximum quantum number used in the computation of the descriptors."""
    Qbar_lm = np.zeros((p.shape[0], descriptors.sph.shape[1]), dtype=np.complex128)
    num_neighbors = descriptors.sph.shape[0]/p.shape[0]
    for i in range(p.shape[0]):
        indices = nlist.index_i == i
        Qbar_lm[i, :] = np.sum(descriptors.sph[indices, :], axis=0)/num_neighbors

    Ql = np.zeros((Qbar_lm.shape[0], descriptors.l_max+1))
    for i in range(Ql.shape[0]):
        for l in range(Ql.shape[1]):
            for k in range(l**2, (l+1)**2):
                Ql[i, l] += np.absolute(Qbar_lm[i, k])**2
            Ql[i, l] = np.sqrt(4*np.pi/(2*l + 1) * Ql[i, l])

    return Ql

Ql = get_Ql(points, ld, nl)

Since freud provides the ability to calculate these parameters as well, we can directly check that our answers are correct. Note: more information on the LocalQl class can be found in the documentation or in the LocalQl example.

[6]:
L = 6
ql = freud.order.LocalQl(box, r_max*2, L, 0)
ql.compute(points, nl)
if np.allclose(ql.Ql, Ql[:, L]):
    print("Our manual Ql calculation matches the Steinhardt OP class!")
Our manual Ql calculation matches the Steinhardt OP class!

For a brief demonstration of why the Steinhardt order parameters can be useful, let’s look at the result of thermalizing our points and recomputing this measure.

[7]:
variances = [0.02, 0.5, 1]
point_arrays = []
nns = []
nls = []
for v in variances:
    point_arrays.append(
        points + np.random.multivariate_normal(
            mean=(0, 0, 0), cov=v*np.eye(3), size=points.shape[0]))
    nns.append(freud.locality.NearestNeighbors(r_max, num_neighbors))
    nns[-1].compute(box, point_arrays[-1])
    nls.append(nns[-1].nlist)
[8]:
box, points = util.make_sc(5, 5, 5)
fig = plt.figure(figsize=(14, 6))
axes = []
plot_str = "1" + str(len(variances)) + "{}"
for i, v in enumerate(variances):
    axes.append(fig.add_subplot(plot_str.format(i+1), projection='3d'))
    axes[-1].scatter(point_arrays[i][:, 0], point_arrays[i][:, 1], point_arrays[i][:, 2])
    axes[-1].set_title("Variance = {}".format(v), fontsize=16);
plt.show()
_images/examples_module_intros_Environment-LocalDescriptors-Steinhardt_17_0.png

If we recompute the Steinhardt OP for each of these data sets, we see that adding noise has the effect of smoothing the order parameter such that the peak we observed for the perfect crystal is no longer observable.

[9]:
lds = []
Qls = []
for i, v in enumerate(variances):
    lds.append(freud.environment.LocalDescriptors(num_neighbors, l_max, r_max))
    lds[-1].compute(box, num_neighbors, point_arrays[i], mode='global', nlist=nls[i]);
    Qls.append(get_Ql(point_arrays[i], lds[-1], nls[i]))
[10]:
fig, ax = plt.subplots()
for i, Q in enumerate(Qls):
    lim_out = ax.hist(Q[:, L], label="Variance = {}".format(variances[i]), density=True)
    if i == 0:
        # Can choose any element, all are identical in the reference case
        ax.vlines(Ql[:, L][0], 0, np.max(lim_out[0]), label='Reference')
ax.set_title("Histogram of $Q_l$ values", fontsize=16)
ax.set_ylabel("Frequency", fontsize=14)
ax.set_xlabel("$Q_l$", fontsize=14)
ax.legend(fontsize=14);
plt.show()
_images/examples_module_intros_Environment-LocalDescriptors-Steinhardt_20_0.png

This type of identification process is one application of the LocalDescriptors outputs. In the case of Steinhardt OPs, it provides a simple fingerprint for comparing thermalized systems to a known ideal structure to measure their similarity.

For reference, we can also check these values against the LocalQl class again:

[11]:
for i, pa in enumerate(point_arrays):
    ql = freud.order.LocalQl(box, r_max*2, L, 0)
    ql.compute(pa, nls[i])
    if np.allclose(ql.Ql, Qls[i][:, L]):
        print("Our manual Ql calculation matches the Steinhardt OP class!")
Our manual Ql calculation matches the Steinhardt OP class!
Our manual Ql calculation matches the Steinhardt OP class!
Our manual Ql calculation matches the Steinhardt OP class!
MatchEnv

The freud.environment.MatchEnv class finds and clusters local environments, as determined by the vectors pointing to neighbor particles. Neighbors can be defined by a cutoff distance or a number of nearest neighbors, and the resulting freud.locality.NeighborList is used to enumerate a set of vectors, defining an “environment.” These environments are compared with the environments of neighboring particles to form spatial clusters, which usually correspond to grains, droplets, or crystalline domains of a system. MatchEnv has several parameters that alter its behavior; please see the documentation or the helper functions below for descriptions of these parameters.

In this example, we cluster the local environments of hexagons. Clusters with 5 or fewer particles are colored dark gray.

Simulation data courtesy of Shannon Moran, sample code courtesy of Erin Teich.

[1]:
import numpy as np
import freud
from collections import Counter
import matplotlib.pyplot as plt
from util import box_2d_to_points

def get_cluster_arr(box, pos, rcut, num_neigh, threshold, hard_r=False,
                    registration=False, global_search=False):
    """Computes clusters of particles' local environments.

    Args:
        rcut (float):
            Cutoff radius for particles' neighbors.
        num_neigh (int):
            Number of neighbors to consider in every particle's local environment.
        threshold (float):
            Maximum magnitude of the vector difference between two vectors,
            below which we call them matching.
        hard_r (bool):
            If True, add all particles that fall within the threshold of
            rcut to the environment.
        global_search (bool):
            If True, do an exhaustive search wherein the environments of
            every single pair of particles in the simulation are compared.
            If False, only compare the environments of neighboring particles.
        registration (bool):
            Controls whether we first use brute force registration to
            orient the second set of vectors such that it minimizes the
            RMSD between the two sets.

    Returns:
        tuple(np.ndarray, dict): array of cluster indices for every particle
        and a dictionary mapping cluster_index keys to vector_array values
        giving all vectors associated with each environment.
    """
    # Perform the env-matching calculation
    match = freud.environment.MatchEnv(box, rcut, num_neigh)
    match.cluster(pos, threshold, hard_r=hard_r,
                  registration=registration, global_search=global_search)
    # Get all clusters. This returns an array in which every
    # particle is indexed by the cluster that it belongs to.
    cluster_envs = {}
    # Get the sets of vectors that correspond to all clusters.
    for cluster_ind in match.clusters:
        if cluster_ind not in cluster_envs:
            cluster_envs[cluster_ind] = np.copy(match.getEnvironment(cluster_ind))

    return np.copy(match.clusters), cluster_envs

def color_by_clust(cluster_index_arr, no_color_thresh=1,
                   no_color='#333333', cmap=plt.get_cmap('viridis')):
    """Takes a cluster_index_array for every particle and returns a
    dictionary of (cluster index, hexcolor) color pairs.

    Args:
        cluster_index_arr (numpy.ndarray):
            The array of cluster indices, one per particle.
        no_color_thresh (int):
            Clusters with this number of particles or fewer will be
            colored with no_color.
        no_color (color):
            What we color particles whose cluster size is below no_color_thresh.
        cmap (color map):
            The color map we use to color all particles whose
            cluster size is above no_color_thresh.
    """
    # Count to find most common clusters
    cluster_counts = Counter(cluster_index_arr)
    # Re-label the cluster indices by size
    color_count = 0
    color_dict = {cluster[0]: counter for cluster, counter in
                  zip(cluster_counts.most_common(),
                      range(len(cluster_counts)))}

    # Don't show colors for clusters below the threshold
    for cluster_id in cluster_counts:
        if cluster_counts[cluster_id] <= no_color_thresh:
            color_dict[cluster_id] = -1
    OP_arr = np.linspace(0.0, 1.0, max(color_dict.values())+1)

    # Get hex colors for all clusters of size greater than no_color_thresh
    for old_cluster_index, new_cluster_index in color_dict.items():
        if new_cluster_index == -1:
            color_dict[old_cluster_index] = no_color
        else:
            color_dict[old_cluster_index] = cmap(OP_arr[new_cluster_index])

    return color_dict

We load the simulation data and call the analysis functions defined above. Notice that we use 6 nearest neighbors, since our system is made of hexagons that tend to cluster with 6 neighbors.

[2]:
ex_data = np.load('data/MatchEnv_Hexagons.npz')
box = ex_data['box']
positions = ex_data['positions']
orientations = ex_data['orientations']

cluster_index_arr, cluster_envs = get_cluster_arr(box, positions, rcut=5.0, num_neigh=6,
                                                  threshold=0.2, hard_r=False,
                                                  registration=False, global_search=False)
color_dict = color_by_clust(cluster_index_arr, no_color_thresh=5)
colors = [color_dict[i] for i in cluster_index_arr]

Below, we plot the resulting clusters. The colors correspond to the cluster size.

[3]:
plt.figure(figsize=(12, 12), facecolor='white')
box_points = box_2d_to_points(freud.box.Box.from_box(box))
plt.plot(box_points[:, 0], box_points[:, 1], c='black')
plt.scatter(positions[:, 0], positions[:, 1], c=colors, s=20)
ax = plt.gca()
ax.set_xlim((min(box_points[:, 0]), max(box_points[:, 0])))
ax.set_ylim((min(box_points[:, 1]), max(box_points[:, 1])))
ax.set_aspect('equal')
plt.title('Clusters Colored by Particle Local Environment')
plt.show()
_images/examples_module_intros_Environment-MatchEnvCluster_5_0.png
Interface
Locating Particles on Interfacial Boundaries

The freud.interface module compares the distances between two sets of points to determine the interfacial particles.

[1]:
import freud
import numpy as np
import matplotlib.pyplot as plt

To make a pretend data set, we create a large number of blue (-1) particles on a square grid. Then we place grain centers on a larger grid and draw grain radii from a normal distribution. We color the particles red (+1) if their distance from a grain center is less than the grain radius.

[2]:
# Set up the system
box = freud.box.Box.square(L=10)
dx = 0.15
num_grains = 4
dg = box.Lx/num_grains
points = np.array([[i, j, 0]
                   for j in np.arange(-box.Ly/2, box.Ly/2, dx)
                   for i in np.arange(-box.Lx/2, box.Lx/2, dx)])
values = np.array([-1]*points.shape[0])
centroids = [[i*dg + 0.5*dg, j*dg + 0.5*dg, 0]
             for i in range(num_grains) for j in range(num_grains)]
grain_radii = np.abs(np.random.normal(size=num_grains**2, loc=0.25*dg, scale=0.05*dg))
for center, radius in zip(centroids, grain_radii):
    lc = freud.locality.LinkCell(box, radius).compute(box, points, [center])
    for i in lc.nlist.index_i:
        values[i] = 1

blue_points = points[values < 0]
red_points = points[values > 0]

plt.figure(figsize=(8, 8))
plt.scatter(blue_points[:, 0],
            blue_points[:, 1],
            marker='o', color='blue', s=25)
plt.scatter(red_points[:, 0],
            red_points[:, 1],
            marker='o', color='red', s=25)
plt.show()
_images/examples_module_intros_Interface-Interface_3_0.png

This system is phase-separated because the red particles are generally near one another, and so are the blue particles.

We can use freud.interface.InterfaceMeasure to label the particles on either side of the red-blue boundary. The class can tell us how many points are on either side of the interface:

[3]:
iface = freud.interface.InterfaceMeasure(r_cut=0.2)
iface.compute(box=box, ref_points=blue_points, points=red_points)

print('There are', iface.ref_point_count, 'reference (blue) points on the interface.')
print('There are', iface.point_count, '(red) points on the interface.')
There are 410 reference (blue) points on the interface.
There are 346 (red) points on the interface.

Now we can plot the particles on the interface. We color the outside of the interface cyan and the inside of the interface black.

[4]:
plt.figure(figsize=(8, 8))

plt.scatter(blue_points[:, 0],
            blue_points[:, 1],
            marker='o', color='blue', s=25)
plt.scatter(red_points[:, 0],
            red_points[:, 1],
            marker='o', color='red', s=25)

plt.scatter(blue_points[iface.ref_point_ids, 0],
            blue_points[iface.ref_point_ids, 1],
            marker='o', color='cyan', s=25)
plt.scatter(red_points[iface.point_ids, 0],
            red_points[iface.point_ids, 1],
            marker='o', color='black', s=25)

plt.show()
_images/examples_module_intros_Interface-Interface_7_0.png
Hexatic Order Parameter

The hexatic order parameter measures how closely the local environment around a particle resembles perfect \(k\)-atic symmetry, e.g. how closely the environment resembles hexagonal/hexatic symmetry for \(k=6\). The order parameter is given by:

\[\psi_k \left( i \right) = \frac{1}{n} \sum \limits_j^n e^{k i \theta_{ij}}\]

where \(\theta_{ij}\) is the angle between the vector \(\vec{r}_{ij}\) and \(\left(1, 0\right)\).

The pseudocode is given below:

for each particle i:
    neighbors = nearestNeighbors(i, n)
    for each particle j in neighbors:
        r_ij = position[j] - position[i]
        theta_ij = arctan2(r_ij.y, r_ij.x)
        psi_array[i] += exp(complex(0,k*theta_ij))
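
For illustration only, here is a minimal NumPy translation of the pseudocode above (a sketch, not freud's implementation; it assumes a precomputed list of neighbor indices per particle and ignores periodic boundaries, which freud handles through the box):

import numpy as np

def psi_k(positions, neighbor_indices, k=6):
    """Per-particle k-atic order parameter from a precomputed neighbor index list."""
    psi = np.zeros(len(positions), dtype=np.complex128)
    for i, neighbors in enumerate(neighbor_indices):
        r_ij = positions[neighbors] - positions[i]      # vectors to the n neighbors
        theta_ij = np.arctan2(r_ij[:, 1], r_ij[:, 0])   # bond angles relative to (1, 0)
        psi[i] = np.mean(np.exp(1j * k * theta_ij))     # average of exp(i k theta)
    return psi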

The data sets used in this example are a system of hard hexagons, simulated in the NVT thermodynamic ensemble in HOOMD-blue, for a dense fluid of hexagons at packing fraction \(\phi = 0.65\) and solids at packing fractions \(\phi = 0.75, 0.85\).

[1]:
import numpy as np
import freud
from bokeh.io import output_notebook
from bokeh.plotting import figure, show
import util
output_notebook()
[2]:
def plot_hex_order_param(data_path, title):
    # Create hexatic object
    hex_order = freud.order.HexOrderParameter(rmax=1.2, k=6, n=6)

    # Load the data
    box_data = np.load("{}/box_data.npy".format(data_path))
    pos_data = np.load("{}/pos_data.npy".format(data_path))
    quat_data = np.load("{}/quat_data.npy".format(data_path))

    # Grab data from last frame
    l_box = box_data[-1].tolist()
    l_pos = pos_data[-1]
    l_quat = quat_data[-1]
    l_ang = 2*np.arctan2(l_quat[:, 3], l_quat[:, 0])

    # Compute hexatic order for 6 nearest neighbors
    hex_order.compute(l_box, l_pos)
    psi_k = hex_order.psi
    avg_psi_k = np.mean(psi_k)

    # Create hexagon vertices
    verts = util.make_polygon(sides=6, radius=0.6204)
    # Create array of transformed positions
    patches = util.local_to_global(verts, l_pos[:, :2], l_ang)
    # Create an array of angles relative to the average
    relative_angles = np.angle(psi_k) - np.angle(avg_psi_k)
    # Plot in bokeh
    p = figure(title=title)
    p.patches(xs=patches[:, :, 0].tolist(), ys=patches[:, :, 1].tolist(),
              fill_color=[util.cubeellipse(x) for x in relative_angles],
              line_color="black")
    util.default_bokeh(p)
    show(p)
[3]:
plot_hex_order_param('data/phi065', 'Hexatic Order Parameter, 0.65 density')

As the density increases to \(\phi=0.75\), the shapes are forced to align more closely so that they may tile space effectively.

[4]:
plot_hex_order_param('data/phi075', 'Hexatic Order Parameter, 0.75 density')

As the density increases to \(\phi=0.85\), the alignment becomes even stronger and defects are no longer visible.

[5]:
plot_hex_order_param('data/phi085', 'Hexatic Order Parameter, 0.85 density')
NematicOrderParameter

The freud.order module provides the tools to calculate various order parameters that can be used to identify phase transitions. This notebook demonstrates the nematic order parameter, which can be used to identify systems with strong orientational ordering but no translational ordering. For this example, we’ll start with a set of random positions in a 3D system, each with a fixed, assigned orientation. Then, we will show how deviations from these orientations are exhibited in the order parameter.

[1]:
import freud
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import util

In order to work with orientations in freud, we need to do some math with quaternions. If you are unfamiliar with quaternions, you can read more about their definition and how they can be used to represent rotations. For the purpose of this tutorial, just consider them as 4D vectors, and know that the set of normalized (i.e. unit norm) 4D vectors can be used to represent rotations in 3D. In fact, unit quaternions map onto 3x3 rotation matrices two-to-one (\(q\) and \(-q\) represent the same rotation). Quaternions are more computationally convenient, however, because they only require storing 4 numbers rather than 9, and they can be much more easily chained together. For our purposes, you can largely ignore the contents of the next cell, other than to note that this is how we perform rotations of vectors using quaternions instead of matrices.

[2]:
# These functions are adapted from the rowan quaternion library.
# See rowan.readthedocs.io for more information.
def quat_multiply(qi, qj):
    """Multiply two sets of quaternions."""
    output = np.empty(np.broadcast(qi, qj).shape)

    output[..., 0] = qi[..., 0] * qj[..., 0] - \
        np.sum(qi[..., 1:] * qj[..., 1:], axis=-1)
    output[..., 1:] = (qi[..., 0, np.newaxis] * qj[..., 1:] +
                       qj[..., 0, np.newaxis] * qi[..., 1:] +
                       np.cross(qi[..., 1:], qj[..., 1:]))
    return output

def quat_rotate(q, v):
    """Rotate a vector by a quaternion."""
    v = np.array([0, *v])

    q_conj = q.copy()
    q_conj[..., 1:] *= -1

    return quat_multiply(q, quat_multiply(v, q_conj))[..., 1:]
[3]:
# Random positions are fine for this. Order is measured
# in terms of similarity of orientations, not positions.
L = 10
positions = np.random.rand(100, 3)*L - L/2
box = freud.box.Box.cube(L=L)
orientations = np.zeros((100, 4))
orientations[:, 0] = 1  # Quaternion (1, 0, 0, 0) is default orientation
[4]:
# To show orientations, we use arrows rotated by the quaternions.
arrowheads = quat_rotate(orientations, np.array([1, 0, 0]))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver3D(positions[:, 0], positions[:, 1], positions[:, 2],
            arrowheads[:, 0], arrowheads[:, 1], arrowheads[:, 2])
ax.set_title("Orientations", fontsize=16);
_images/examples_module_intros_Order-NematicOrderParameter_5_0.png

The nematic order parameter provides a measure of how much of the system is aligned with respect to some provided reference vector. As a result, we can now compute the order parameter for a few simple cases. Since our original system is oriented along the x-axis, we can immediately test for that, as well as orientation along any of the other coordinate axes.

[5]:
nop = freud.order.NematicOrderParameter([1, 0, 0])
nop.compute(orientations)
print("The value of the order parameter is {}.".format(nop.nematic_order_parameter))
The value of the order parameter is 1.0.

In general, the nematic order parameter is defined as the largest eigenvalue of the nematic tensor, which is also computed by this class and represents an average over the orientations of all particles in the system; the corresponding eigenvector is the director, the average direction of alignment. As a result, we can also look at the intermediate results of our calculation and see how they are related. To do so, let’s consider a more interesting system with random orientations.

[6]:
# Quaternions can be simply sampled as 4-vectors.
# Note that these samples are not uniformly distributed rotations,
# but that is not important for our current applications.
# However, we must ensure that the quaternions are normalized:
# only unit quaternions represent rotations.
np.random.seed(0)
v = 0.05
orientations = np.random.multivariate_normal(mean=[1, 0, 0, 0], cov=v*np.eye(4), size=positions.shape[0])
orientations /= np.linalg.norm(orientations, axis=1)[:, np.newaxis]
[7]:
# To show orientations, we use arrows rotated by the quaternions.
arrowheads = quat_rotate(orientations, np.array([1, 0, 0]))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver3D(positions[:, 0], positions[:, 1], positions[:, 2],
            arrowheads[:, 0], arrowheads[:, 1], arrowheads[:, 2])
ax.set_title("Orientations", fontsize=16);
_images/examples_module_intros_Order-NematicOrderParameter_10_0.png

First, we see that for this nontrivial system the order parameter now depends on the choice of director.

[8]:
axes = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
for ax in axes:
    nop = freud.order.NematicOrderParameter(ax)
    nop.compute(orientations)
    print("For axis {}, the value of the order parameter is {:0.3f}.".format(ax, nop.nematic_order_parameter))
For axis [1, 0, 0], the value of the order parameter is 0.608.
For axis [0, 1, 0], the value of the order parameter is 0.564.
For axis [0, 0, 1], the value of the order parameter is 0.606.
For axis [1, 1, 0], the value of the order parameter is 0.611.
For axis [1, 0, 1], the value of the order parameter is 0.616.
For axis [0, 1, 1], the value of the order parameter is 0.577.
For axis [1, 1, 1], the value of the order parameter is 0.608.

Furthermore, increasing the amount of variance in the orientations depresses the value of the order parameter even further.

[9]:
v = 0.5
orientations = np.random.multivariate_normal(mean=[1, 0, 0, 0], cov=v*np.eye(4), size=positions.shape[0])
orientations /= np.linalg.norm(orientations, axis=1)[:, np.newaxis]

arrowheads = quat_rotate(orientations, np.array([1, 0, 0]))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver3D(positions[:, 0], positions[:, 1], positions[:, 2],
            arrowheads[:, 0], arrowheads[:, 1], arrowheads[:, 2])
ax.set_title("Orientations", fontsize=16);

axes = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
for ax in axes:
    nop = freud.order.NematicOrderParameter(ax)
    nop.compute(orientations)
    print("For axis {}, the value of the order parameter is {:0.3f}.".format(ax, nop.nematic_order_parameter))

For axis [1, 0, 0], the value of the order parameter is 0.047.
For axis [0, 1, 0], the value of the order parameter is 0.075.
For axis [0, 0, 1], the value of the order parameter is 0.062.
For axis [1, 1, 0], the value of the order parameter is 0.090.
For axis [1, 0, 1], the value of the order parameter is 0.071.
For axis [0, 1, 1], the value of the order parameter is 0.069.
For axis [1, 1, 1], the value of the order parameter is 0.122.
_images/examples_module_intros_Order-NematicOrderParameter_14_1.png

Finally, we can look at the per-particle quantities and build them up to get the actual value of the order parameter.

[10]:
# The per-particle values averaged give the nematic tensor
print(np.allclose(np.mean(nop.particle_tensor, axis=0), nop.nematic_tensor))
print("The nematic tensor:")
print(nop.nematic_tensor)

eig = np.linalg.eig(nop.nematic_tensor)
print("The eigenvalues of the nematic tensor:")
print(eig[0])
print("The eigenvectors of the nematic tensor:")
print(eig[1])

# The largest eigenvalue
print("The largest eigenvalue, {:0.3f}, is equal to the order parameter {:0.3f}.".format(
    np.max(eig[0]), nop.nematic_order_parameter))
True
The nematic tensor:
[[-0.08164976 -0.00958258  0.05754685]
 [-0.00958258  0.02508484  0.07119219]
 [ 0.05754685  0.07119219  0.05656487]]
The eigenvalues of the nematic tensor:
[-0.11201976 -0.00962728  0.121647  ]
The eigenvectors of the nematic tensor:
[[-0.8684619  -0.45401227  0.19911501]
 [-0.27491596  0.7752717   0.5686607 ]
 [ 0.41254717 -0.43912026  0.7981092 ]]
The largest eigenvalue, 0.122, is equal to the order parameter 0.122.
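
As a cross-check, the same tensor can be assembled by hand from the rotated directors. The snippet below is a minimal sketch that assumes the quat_rotate helper used earlier in this notebook and the standard Maier-Saupe construction described above; the [1, 1, 1] axis is the last director used in the loop.

# Sketch: build the nematic tensor directly from the particle orientations.
director = np.array([1, 1, 1]) / np.sqrt(3)
directors = quat_rotate(orientations, director)
per_particle = 1.5 * np.einsum('ij,ik->ijk', directors, directors) - 0.5 * np.eye(3)
Q = per_particle.mean(axis=0)
# The largest eigenvalue should be close to nop.nematic_order_parameter above.
print(np.linalg.eigvalsh(Q).max())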
LocalQl, LocalWl

The freud.order module provides the tools to calculate various order parameters that can be used to identify phase transitions. In the context of crystalline systems, some of the best known order parameters are the Steinhardt order parameters \(Q_l\) and \(W_l\). These order parameters are mathematically defined according to certain rotationally invariant combinations of spherical harmonics calculated between particles and their nearest neighbors, so they provide information about local particle environments. As a result, considering distributions of these order parameters across a system can help characterize the overall system’s ordering. The primary utility of these order parameters arises from the fact that they often exhibit certain characteristic values for specific crystal structures.

In this notebook, we will use the order parameters to identify certain basic structures: BCC, FCC, and simple cubic. FCC, BCC, and simple cubic structures each exhibit characteristic values of \(Q_l\) for some \(l\) value, meaning that in a perfect crystal all the particles in one of these structures will have the same value of \(Q_l\). As a result, we can use these characteristic \(Q_l\) values to determine whether a disordered fluid is beginning to crystallize into one structure or another. The \(l\) values correspond to the \(l\) quantum number used in defining the underlying spherical harmonics; for example, the \(Q_4\) order parameter would provide a measure of 4-fold ordering.
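
Concretely, the per-particle \(Q_l\) takes the standard Steinhardt form

\(Q_l(i) = \sqrt{\frac{4\pi}{2l+1}\sum_{m=-l}^{l}\left|\frac{1}{N_b(i)}\sum_{j \in N_b(i)} Y_{lm}(\hat{r}_{ij})\right|^{2}}\)

where the inner sum averages the spherical harmonics over the \(N_b(i)\) neighbors of particle \(i\), and the outer combination makes the result rotationally invariant.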

[1]:
import freud
import numpy as np
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import util
# Try to plot using KDE if available, otherwise revert to histogram
try:
    from sklearn.neighbors.kde import KernelDensity
    kde = True
except ImportError:
    kde = False

np.random.seed(1)
[2]:
%matplotlib inline

We first construct ideal crystals and then extract the characteristic value of \(Q_l\) for each of these structures. Note that we are using the LocalQlNear class, which takes as a parameter the number of nearest neighbors to use in addition to a distance cutoff. The base LocalQl class can also be used, but it can be much more sensitive to the choice of distance cutoff; by contrast, the corresponding LocalQlNear class is guaranteed to find the number of neighbors required. Such a guarantee is especially useful when trying to identify known structures that have specific coordination numbers. In this case, we know that simple cubic has a coordination number of 6, BCC has 8, and FCC has 12, so we are looking for the values of \(Q_6\), \(Q_8\), and \(Q_{12}\), respectively. Therefore, we can also enforce that we require 6, 8, and 12 nearest neighbors to be included in the calculation, respectively.

[3]:
r_max = 2

L = 6
box, sc = util.make_sc(5, 5, 5)
# The last two arguments are the quantum number l and the number of nearest neighbors.
ql = freud.order.LocalQlNear(box, r_max*2, L, L)
Ql_sc = ql.compute(sc).Ql
mean_sc = np.mean(Ql_sc)
print("The standard deviation in the values computed for simple cubic is {}".format(np.std(Ql_sc)))

L = 8
box, bcc = util.make_bcc(5, 5, 5)
ql = freud.order.LocalQlNear(box, r_max*2, L, L)
Ql_bcc = ql.compute(bcc).Ql
mean_bcc = np.mean(Ql_bcc)
print("The standard deviation in the values computed for BCC is {}".format(np.std(Ql_bcc)))

L = 12
box, fcc = util.make_fcc(5, 5, 5)
ql = freud.order.LocalQlNear(box, r_max*2, L, L)
Ql_fcc = ql.compute(fcc).Ql
mean_fcc = np.mean(Ql_fcc)
print("The standard deviation in the values computed for FCC is {}".format(np.std(Ql_fcc)))
The standard deviation in the values computed for simple cubic is 1.5993604662867256e-08
The standard deviation in the values computed for BCC is 2.443063351620367e-08
The standard deviation in the values computed for FCC is 8.293402942172179e-08

Given that the per-particle order parameter values are essentially identical to within machine precision, we can be confident that we have found the characteristic value of \(Q_l\) for each of these systems. We can now compare these values to the values of \(Q_l\) in thermalized systems to determine the extent to which they are exhibiting the ordering expected of one of these perfect crystals.

[4]:
def make_noisy_replicas(points, variances):
    """Given a set of points, return an array of those points with noise."""
    point_arrays = []
    for v in variances:
        point_arrays.append(
            points + np.random.multivariate_normal(
                mean=(0, 0, 0), cov=v*np.eye(3), size=points.shape[0]))
    return point_arrays
[5]:
variances = [0.005, 0.1, 1]
sc_arrays = make_noisy_replicas(sc, variances)
bcc_arrays = make_noisy_replicas(bcc, variances)
fcc_arrays = make_noisy_replicas(fcc, variances)
[6]:
fig, axes = plt.subplots(1, 3, figsize=(16, 5))

# Zip up the data that will be needed for each structure type.
zip_obj = zip([sc_arrays, bcc_arrays, fcc_arrays], [mean_sc, mean_bcc, mean_fcc],
              [6, 8, 12], ["Simple Cubic", "BCC", "FCC"])

for i, (arrays, ref_val, L, title) in enumerate(zip_obj):
    ax = axes[i]
    for j, (array, var) in enumerate(zip(arrays, variances)):
        ql = freud.order.LocalQlNear(box, r_max*2, L, L)
        ql.compute(array)
        if not kde:
            ax.hist(ql.Ql, label="Variance = {}".format(var), density=True)
        else:
            padding = 0.02
            N = 50
            bins = np.linspace(np.min(ql.Ql)-padding, np.max(ql.Ql)+padding, N)

            kde_estimator = KernelDensity(bandwidth=0.004)
            kde_estimator.fit(ql.Ql[:, np.newaxis])
            Ql = np.exp(kde_estimator.score_samples(bins[:, np.newaxis]))

            ax.plot(bins, Ql, label="Variance = {}".format(var))
        ax.set_title(title, fontsize=20)
        ax.tick_params(axis='both', which='both', labelsize=14)
        if j == 0:
            # Can choose any element, all are identical in the reference case
            ax.vlines(ref_val, 0, np.max(ax.get_ylim()[1]), label='Reference')
fig.legend(*ax.get_legend_handles_labels(), fontsize=18);  # Only have one legend
fig.subplots_adjust(right=0.78)
_images/examples_module_intros_Order-Steinhardt_8_0.png

From this figure, we can see that for each type of structure, increasing the amount of noise makes the distribution of the order parameter values less peaked at the expected reference value. As a result, we can use this method to identify specific structures. However, these plots also show that the measure is not always reliable; for example, the BCC case shows minimal distinction between variances of \(0.1\) and \(1\), which you might hope to be easily distinguishable, while adding even minimal noise to the FCC crystal makes the system deviate substantially from the ideal value of the order parameter. As a result, choosing the appropriate parameterization for the order parameter (which quantum number \(l\) to use, how many nearest neighbors, the cutoff radius \(r_{cut}\), etc.) can be very important.

In addition to the LocalQlNear class demonstrated here and the \(r_{cut}\)-based LocalQl variant, there are also the LocalWl and LocalWlNear classes. The latter two classes use the same spherical harmonics to compute a slightly different quantity: \(Q_l\) is a second-order rotational invariant of the neighbor-averaged spherical harmonics, while \(W_l\) is a third-order invariant built from the same quantities. The \(W_l\) values may be better at identifying some structures, so some experimentation and reference to the appropriate literature can be useful (as a starting point, see Steinhardt’s original paper).
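
For reference, in the Steinhardt convention \(W_l\) is built from the same neighbor-averaged harmonics \(\langle Y_{lm} \rangle_i\):

\(W_l(i) = \sum_{m_1 + m_2 + m_3 = 0} \begin{pmatrix} l & l & l \\ m_1 & m_2 & m_3 \end{pmatrix} \langle Y_{lm_1} \rangle_i \langle Y_{lm_2} \rangle_i \langle Y_{lm_3} \rangle_i\)

where the parenthesized symbol is a Wigner 3-j symbol; \(W_l\) is often reported normalized by \(\left(\sum_m |\langle Y_{lm} \rangle_i|^2\right)^{3/2}\), so consult the class documentation for the exact normalization used.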

Beyond the differences between these classes, each class also provides compute methods that perform a further type of averaging. Calling computeAve instead of compute will populate the ave_Ql (or ave_Wl) arrays, which perform an additional level of implicit averaging over the second neighbor shells of particles to accumulate more information on particle environments (see the original reference). To get a sense for the best method for analyzing a specific system, the best course of action is to try out different parameters or to consult the literature to see how these have been used in the past.
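
As a minimal sketch of that neighbor-averaged variant (reusing util and r_max from above, and assuming computeAve accepts a points array just like compute):

# Neighbor-averaged Steinhardt order parameter on the simple cubic example
sc_box, sc_points = util.make_sc(5, 5, 5)
ql_ave = freud.order.LocalQlNear(sc_box, r_max*2, 6, 6)
ql_ave.computeAve(sc_points)
print(np.mean(ql_ave.ave_Ql))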

PMFTXY2D

The PMFT (potential of mean force and torque) gives the effective potential associated with finding a particle pair in a given spatial (positional and orientational) configuration. The PMFT is computed in the same manner as the RDF; the basic algorithm is sketched below:

for each particle i:
    for each particle j:
        v_ij = position[j] - position[i]
        bin_x, bin_y = convert_to_bin(v_ij)
        pcf_array[bin_y][bin_x]++

freud uses cell lists and parallelism to optimize this algorithm.
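
For intuition, the binning step can be sketched in pure NumPy for a single 2D frame, ignoring periodic boundaries and particle orientations; positions here is a hypothetical (N, 2) array of particle centers.

import numpy as np

v = positions[np.newaxis, :, :] - positions[:, np.newaxis, :]  # all pair vectors
mask = ~np.eye(len(positions), dtype=bool)                      # drop i == j pairs
pcf_array, x_edges, y_edges = np.histogram2d(
    v[mask][:, 0], v[mask][:, 1], bins=300, range=[[-3.0, 3.0], [-3.0, 3.0]])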

The data sets used in this example are a system of hard hexagons, simulated in the NVT thermodynamic ensemble in HOOMD-blue, for a dense fluid of hexagons at packing fraction \(\phi = 0.65\) and solids at packing fractions \(\phi = 0.75, 0.85\).

[1]:
import freud
freud.parallel.setNumThreads(4)
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
import util
from scipy.ndimage.filters import gaussian_filter

%matplotlib inline
matplotlib.rcParams.update({'font.size': 20,
                            'axes.titlesize': 20,
                            'axes.labelsize': 20,
                            'xtick.labelsize': 16,
                            'ytick.labelsize': 16,
                            'savefig.pad_inches': 0.025,
                            'lines.linewidth': 2})
[2]:
def plot_pmft(data_path, phi):
    # Create the pmft object
    pmft = freud.pmft.PMFTXY2D(x_max=3.0, y_max=3.0, n_x=300, n_y=300)

    # Load the data
    box_data = np.load("{}/box_data.npy".format(data_path))
    pos_data = np.load("{}/pos_data.npy".format(data_path))
    quat_data = np.load("{}/quat_data.npy".format(data_path))
    n_frames = pos_data.shape[0]

    for i in range(1, n_frames):
        # Read box, position data
        l_box = box_data[i].tolist()
        l_pos = pos_data[i]
        l_quat = quat_data[i]
        l_ang = 2*np.arctan2(l_quat[:, 3], l_quat[:, 0])
        l_ang = l_ang % (2 * np.pi)

        pmft.accumulate(l_box, l_pos, l_ang, l_pos, l_ang)

    # Get the value of the PMFT histogram bins
    pmft_arr = np.copy(pmft.PMFT)

    # Do some simple post-processing for plotting purposes
    pmft_arr[np.isinf(pmft_arr)] = np.nan
    dx = (2.0 * 3.0) / pmft.n_bins_X
    dy = (2.0 * 3.0) / pmft.n_bins_Y
    nan_arr = np.where(np.isnan(pmft_arr))
    for i in range(pmft.n_bins_X):
        x = -3.0 + dx * i
        for j in range(pmft.n_bins_Y):
            y = -3.0 + dy * j
            if ((x*x + y*y < 1.5) and (np.isnan(pmft_arr[j, i]))):
                pmft_arr[j, i] = 10.0
    w = int(2.0 * pmft.n_bins_X / (2.0 * 3.0))
    center = int(pmft.n_bins_X / 2)

    # Get the center of the histogram bins
    pmft_smooth = gaussian_filter(pmft_arr, 1)
    pmft_image = np.copy(pmft_smooth)
    pmft_image[nan_arr] = np.nan
    pmft_smooth = pmft_smooth[center-w:center+w, center-w:center+w]
    pmft_image = pmft_image[center-w:center+w, center-w:center+w]
    x = pmft.X
    y = pmft.Y
    reduced_x = x[center-w:center+w]
    reduced_y = y[center-w:center+w]

    # Plot figures
    f = plt.figure(figsize=(12, 5), facecolor='white')
    values = [-2, -1, 0, 2]
    norm = matplotlib.colors.Normalize(vmin=-2.5, vmax=3.0)
    n_values = [norm(i) for i in values]
    colors = matplotlib.cm.viridis(n_values)
    colors = colors[:, :3]
    verts = util.make_polygon(sides=6, radius=0.6204)
    lims = (-2, 2)
    ax0 = f.add_subplot(1, 2, 1)
    ax1 = f.add_subplot(1, 2, 2)
    for ax in (ax0, ax1):
        ax.contour(reduced_x, reduced_y, pmft_smooth,
                   [9, 10], colors='black')
        ax.contourf(reduced_x, reduced_y, pmft_smooth,
                    [9, 10], hatches='X', colors='none')
        ax.plot(verts[:,0], verts[:,1], color='black', marker=',')
        ax.fill(verts[:,0], verts[:,1], color='black')
        ax.set_aspect('equal')
        ax.set_xlim(lims)
        ax.set_ylim(lims)
        ax.xaxis.set_ticks([i for i in range(lims[0], lims[1]+1)])
        ax.yaxis.set_ticks([i for i in range(lims[0], lims[1]+1)])
        ax.set_xlabel(r'$x$')
        ax.set_ylabel(r'$y$')

    ax0.set_title('PMFT Heat Map, $\phi = {}$'.format(phi))
    im = ax0.imshow(np.flipud(pmft_image),
                    extent=[lims[0], lims[1], lims[0], lims[1]],
                    interpolation='nearest', cmap='viridis',
                    vmin=-2.5, vmax=3.0)
    ax1.set_title('PMFT Contour Plot, $\phi = {}$'.format(phi))
    ax1.contour(reduced_x, reduced_y, pmft_smooth,
            [-2, -1, 0, 2], colors=colors)

    f.subplots_adjust(right=0.85)
    cbar_ax = f.add_axes([0.88, 0.1, 0.02, 0.8])
    f.colorbar(im, cax=cbar_ax)
    plt.show()
65% density

The plot below shows the PMFT of hexagons at 65% density. The hexagons are most likely to be found close to one another, in the darker regions (the lower values of the potential of mean force and torque).

The hatched region near the black hexagon in the center is a region where no data were collected: the hexagons are hard shapes and cannot overlap, so there is an excluded region of space close to the hexagon.

The ring around the hexagon where the PMFT rises and then falls corresponds to the minimum of the radial distribution function – particles tend not to occupy that region, preferring instead to be at close range (in the first neighbor shell) or further away (in the second neighbor shell).

[3]:
plot_pmft('data/phi065', 0.65)
_images/examples_module_intros_PMFT-PMFTXY2D_4_0.png
75% density

As the system density is increased to 75%, the propensity for hexagons to occupy the six sites on the faces of their neighbors increases, as seen by the deeper (darker) wells of the PMFT. Conversely, the shapes strongly dislike occupying the yellow regions, and no particle pairs occupied the white region (so there is no data).

[4]:
plot_pmft('data/phi075', 0.75)
_images/examples_module_intros_PMFT-PMFTXY2D_6_0.png
85% density

Finally, at 85% density, there is a large region where no neighbors can be found, and hexagons strictly occupy sites near those of the perfect hexagonal lattice, at the first- and second-neighbor shells. The wells are deeper and much more spatially confined than those of the systems at lower densities.

[5]:
plot_pmft('data/phi085', 0.85)
_images/examples_module_intros_PMFT-PMFTXY2D_8_0.png
Shifting Example

This notebook shows how to use the shifting option on PMFTXYZ to get high resolution views of PMFT features that are not centered.

[1]:
import numpy as np
from freud import box, pmft

from scipy.interpolate import griddata
from scipy.interpolate import RegularGridInterpolator

import warnings
warnings.simplefilter('ignore')

import matplotlib.pyplot as plt

First we load in our data. The particles used here are implemented with a simple Weeks-Chandler-Andersen isotropic pair potential, so particle orientation is not meaningful.

[2]:
pos_data = np.load('data/XYZ/positions.npy').astype(np.float32)
box_data = np.load('data/XYZ/boxes.npy').astype(np.float32)

First, we calculate the PMFT the same way as shown in the other examples.

[3]:
window = 2**(1/6) # The size of the pmft calculation

res = (100,100,100)
pmft_arr = np.zeros(res)

mypmft = pmft.PMFTXYZ(x_max=window, y_max=window, z_max=window,
                      n_x=res[0], n_y=res[1], n_z=res[2])

# This data is for isotropic particles, so we will just make some unit quaternions
# to use as the orientations
quats = np.zeros((pos_data.shape[1],4)).astype(np.float32)
quats[:,0] = 1

for i in range(10, pos_data.shape[0]):
    l_box = box_data[i]
    l_pos = pos_data[i]
    mypmft.accumulate(l_box, l_pos, quats, l_pos, quats)

unshifted = np.copy(mypmft.PMFT)

x = mypmft.X
y = mypmft.Y
z = mypmft.Z

When we plot a centered slice of the XYZ PMFT, we see that a number of wells are present at some distance from the origin.

[4]:
%matplotlib inline

plt.figure(figsize=(10,10))
plt.imshow(unshifted[int(res[0]/2),:,:])
plt.colorbar()
plt.show()
_images/examples_module_intros_PMFT-PMFTXYZ_Shift_Example_7_0.png

If we want a closer look at the details of those wells, we could increase the PMFT resolution. But this would substantially increase the computational cost, and most of the additional pixels would be wasted on regions far from the wells.

This use case is why the shiftvec argument was implemented. Now we will do the same calculation, but we will use a much smaller window centered on one of the wells.

To do this, we pass a vector to the PMFTXYZ constructor. The window will be centered on this vector.

[5]:
shiftvec = [0.82, 0.82, 0]

window = 2**(1/6)/6 # Smaller window for the shifted case

res = (50,50,50)
pmft_arr = np.zeros(res)

mypmft = pmft.PMFTXYZ(x_max=window, y_max=window, z_max=window,
                      n_x=res[0], n_y=res[1], n_z=res[2], shiftvec=shiftvec)

# This data is for isotropic particles, so we will just make some unit quaternions
# to use as the orientations
quats = np.zeros((pos_data.shape[1],4)).astype(np.float32)
quats[:,0] = 1

for i in range(10,pos_data.shape[0]):
    l_box = box_data[i]
    l_pos = pos_data[i]
    mypmft.accumulate(l_box, l_pos, quats, l_pos, quats)

shifted = np.copy(mypmft.PMFT)

x = mypmft.X
y = mypmft.Y
z = mypmft.Z

Now the PMFT is a high-resolution close-up of one of the bonding wells. Note that as you increase the sampling resolution, you need to increase the number of samples, because there is less averaging in each bin.

[6]:
%matplotlib inline

plt.figure(figsize=(10,10))
plt.imshow(shifted[int(res[2]/2),:,:])
plt.colorbar()
plt.show()
_images/examples_module_intros_PMFT-PMFTXYZ_Shift_Example_11_0.png
Voronoi

The voronoi module finds the Voronoi diagram of a set of points, while respecting periodic boundary conditions (which are not handled by scipy.spatial.Voronoi). This is handled by replicating the points using periodic images that lie outside the box, up to a specified buffer distance.

This case is two-dimensional (with z=0 for all particles) for simplicity, but the voronoi module works for both 2D and 3D simulations.

[1]:
import numpy as np
import freud
import matplotlib
import matplotlib.pyplot as plt

First, we generate some sample points.

[2]:
points = np.array([
    [-0.5, -0.5],
    [0.5, -0.5],
    [-0.5, 0.5],
    [0.5, 0.5]])
plt.scatter(points[:,0], points[:,1])
plt.title('Points')
plt.xlim((-1, 1))
plt.ylim((-1, 1))
plt.show()

# We must add a z=0 component to this array for freud
points = np.hstack((points, np.zeros((points.shape[0], 1))))
_images/examples_module_intros_Voronoi-Voronoi_3_0.png

Now we create a box and a voronoi compute object. Note that the buffer distance must be large enough to ensure that the points are duplicated sufficiently far outside the box for the qhull algorithm. Results are only guaranteed to be correct when the buffer is at least \(L/2\), where \(L\) is the longest box side length.

[3]:
L = 2
box = freud.box.Box.square(L)
voro = freud.voronoi.Voronoi(box, L/2)

Next, we use the compute method to determine the Voronoi polytopes (cells) and the polytopes property to return their coordinates. Note that we use freud’s method chaining here, where a compute method returns the compute object.

[4]:
cells = voro.compute(box=box, positions=points).polytopes
print(cells)
[array([[ 0., -1.,  0.],
       [-1., -1.,  0.],
       [-1.,  0.,  0.],
       [ 0.,  0.,  0.]]), array([[ 0., -1.,  0.],
       [ 0.,  0.,  0.],
       [ 1.,  0.,  0.],
       [ 1., -1.,  0.]]), array([[-1.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  1.,  0.],
       [-1.,  1.,  0.]]), array([[0., 0., 0.],
       [0., 1., 0.],
       [1., 1., 0.],
       [1., 0., 0.]])]

We create a helper function to draw the Voronoi polygons using matplotlib.

[5]:
def draw_voronoi(box, points, cells, nlist=None, color_by_sides=False):
    ax = plt.gca()
    # Draw Voronoi cells
    patches = [plt.Polygon(cell[:, :2]) for cell in cells]
    patch_collection = matplotlib.collections.PatchCollection(patches, edgecolors='black', alpha=0.4)
    cmap = plt.cm.Set1

    if color_by_sides:
        colors = [len(cell) for cell in cells]
    else:
        colors = np.random.permutation(np.arange(len(patches)))

    cmap = plt.cm.get_cmap('Set1', np.unique(colors).size)
    bounds = np.array(range(min(colors), max(colors)+2))

    patch_collection.set_array(np.array(colors))
    patch_collection.set_cmap(cmap)
    patch_collection.set_clim(bounds[0], bounds[-1])
    ax.add_collection(patch_collection)

    # Draw points
    plt.scatter(points[:,0], points[:,1], c=colors)
    plt.title('Voronoi Diagram')
    plt.xlim((-box.Lx/2, box.Lx/2))
    plt.ylim((-box.Ly/2, box.Ly/2))

    # Set equal aspect and draw box
    ax.set_aspect('equal', 'datalim')
    box_patch = plt.Rectangle([-box.Lx/2, -box.Ly/2], box.Lx, box.Ly, alpha=1, fill=None)
    ax.add_patch(box_patch)

    # Draw neighbor lines
    if nlist is not None:
        bonds = np.asarray([points[j] - points[i] for i, j in zip(nlist.index_i, nlist.index_j)])
        box.wrap(bonds)
        line_data = np.asarray([[points[nlist.index_i[i]],
                                 points[nlist.index_i[i]]+bonds[i]] for i in range(len(nlist.index_i))])
        line_data = line_data[:, :, :2]
        line_collection = matplotlib.collections.LineCollection(line_data, alpha=0.3)
        ax.add_collection(line_collection)

    # Show colorbar for number of sides
    if color_by_sides:
        cb = plt.colorbar(patch_collection, ax=ax, ticks=bounds, boundaries=bounds)
        cb.set_ticks(cb.formatter.locs + 0.5)
        cb.set_ticklabels((cb.formatter.locs - 0.5).astype('int'))
        cb.set_label("Number of sides", fontsize=12)
    plt.show()

Now we can draw the Voronoi diagram from the example above.

[6]:
draw_voronoi(box, points, voro.polytopes)
_images/examples_module_intros_Voronoi-Voronoi_11_0.png

This also works for more complex cases, such as this hexagonal lattice.

[7]:
def hexagonal_lattice(rows=3, cols=3, noise=0):
    # Assemble a hexagonal lattice
    points = []
    for row in range(rows*2):
        for col in range(cols):
            x = (col + (0.5 * (row % 2)))*np.sqrt(3)
            y = row*0.5
            points.append((x, y, 0))
    points = np.asarray(points)
    points += np.random.multivariate_normal(mean=np.zeros(3), cov=np.eye(3)*noise, size=points.shape[0])
    # Set z=0 again for all points after adding Gaussian noise
    points[:, 2] = 0

    # Wrap the points into the box
    box = freud.box.Box(Lx=cols*np.sqrt(3), Ly=rows, is2D=True)
    points = box.wrap(points)
    return box, points

# Compute the Voronoi diagram and plot
box, points = hexagonal_lattice()
voro = freud.voronoi.Voronoi(box, np.max(box.L)/2)
voro.compute(box=box, positions=points)
draw_voronoi(box, points, voro.polytopes)
_images/examples_module_intros_Voronoi-Voronoi_13_0.png

For noisy data, we see that the Voronoi diagram can change substantially. We perturb the positions with 2D Gaussian noise:

[8]:
box, points = hexagonal_lattice(rows=4, cols=4, noise=0.04)
voro = freud.voronoi.Voronoi(box, np.max(box.L)/2)
voro.compute(box=box, positions=points)
draw_voronoi(box, points, voro.polytopes)
_images/examples_module_intros_Voronoi-Voronoi_15_0.png

If we color by the number of sides of each Voronoi cell, we can see patterns in the defects: 5-gons and 7-gons tend to pair up.

[9]:
draw_voronoi(box, points, voro.polytopes, color_by_sides=True)
_images/examples_module_intros_Voronoi-Voronoi_17_0.png
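
The same information can be tallied numerically. The snippet below is a small sketch that counts the number of sides of each cell using the polytopes computed above.

from collections import Counter

# Count how many Voronoi cells have each number of sides
side_counts = Counter(len(cell) for cell in voro.polytopes)
print(side_counts)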

We can also compute the volumes of the Voronoi cells. Here, we plot them as a histogram:

[10]:
voro.computeVolumes()
plt.hist(voro.volumes)
plt.title('Voronoi cell volumes')
plt.show()
_images/examples_module_intros_Voronoi-Voronoi_19_0.png

The voronoi module also provides freud.locality.NeighborList objects, where particles are neighbors if they share an edge in the Voronoi diagram. The NeighborList effectively represents the bonds in the Delaunay triangulation.

[11]:
voro.computeNeighbors(box=box, positions=points)
nlist = voro.nlist
draw_voronoi(box, points, voro.polytopes, nlist=nlist)
_images/examples_module_intros_Voronoi-Voronoi_21_0.png
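
A quick sketch of using that neighbor list directly (index_i is the same attribute used by the plotting helper above):

# Number of Voronoi neighbors (Delaunay bonds) per particle; for a nearly
# perfect hexagonal lattice, most counts should be close to 6.
neighbor_counts = np.bincount(nlist.index_i)
print(neighbor_counts)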

The voronoi property stores the raw scipy.spatial.qhull.Voronoi object from scipy.

Important: The plot below also shows the replicated buffer points, not just the ones inside the periodic box. freud.voronoi is intended for use with periodic systems, while scipy.spatial.Voronoi does not recognize periodic boundaries.

[12]:
print(type(voro.voronoi))
from scipy.spatial import voronoi_plot_2d
voronoi_plot_2d(voro.voronoi)
plt.title('Raw Voronoi diagram including buffer points')
plt.show()
<class 'scipy.spatial.qhull.Voronoi'>
_images/examples_module_intros_Voronoi-Voronoi_23_1.png

Example Analyses

The examples below go into greater detail about specific applications of freud and use cases that its analysis methods enable, such as user-defined analyses, machine learning, and data visualization.

Implementing Common Neighbor Analysis as a custom method

Researchers commonly wish to implement their own custom analysis methods for particle simulations. Here, we show an example of how to write Common Neighbor Analysis (Honeycutt and Andersen, J. Phys. Chem. 91, 4950) as a custom method using freud and the NetworkX package.

NetworkX can be installed with pip install networkx.

First, we generate random points and determine which points share neighbors.

[1]:
import freud
import numpy as np
from collections import defaultdict
from util import make_fcc


# Use a face-centered cubic (fcc) system (no noise is added here)
L = 3
box, points = make_fcc(nx=4, ny=4, nz=4, noise=0.)
lc = freud.locality.LinkCell(box, 1+np.sqrt(0.5))
nl = lc.compute(box, points).nlist

# Get all sets of common neighbors.
common_neighbors = defaultdict(list)
for i, p in enumerate(points):
    for j in nl.index_j[nl.index_i == i]:
        for k in nl.index_j[nl.index_i == j]:
            if i != k:
                common_neighbors[(i, k)].append(j)

Next, we use NetworkX to build graphs of common neighbors and compute the Common Neighbor Analysis signatures.

[2]:
import networkx as nx
from collections import Counter

diagrams = defaultdict(list)
particle_counts = defaultdict(Counter)

for (a, b), neighbors in common_neighbors.items():
    # Build up the graph of connections between the
    # common neighbors of a and b.
    g = nx.Graph()
    for i in neighbors:
        for j in set(nl.index_j[
            nl.index_i == i]).intersection(neighbors):
            g.add_edge(i, j)

    # Define the identifiers for a CNA diagram:
    # The first integer is 1 if the particles are bonded, otherwise 2
    # The second integer is the number of shared neighbors
    # The third integer is the number of bonds among shared neighbors
    # The fourth integer is an index, just to ensure uniqueness of diagrams
    diagram_type = 2-int(b in nl.index_j[nl.index_i == a])
    key = (diagram_type, len(neighbors), g.number_of_edges())
    # If we've seen any neighborhood graphs with this signature,
    # we explicitly check if the two graphs are identical to
    # determine whether to save this one. Otherwise, we add
    # the new graph immediately.
    if key in diagrams:
        isomorphs = [nx.is_isomorphic(g, h) for h in diagrams[key]]
        if any(isomorphs):
            idx = isomorphs.index(True)
        else:
            diagrams[key].append(g)
            idx = diagrams[key].index(g)
    else:
        diagrams[key].append(g)
        idx = diagrams[key].index(g)
    cna_signature = key + (idx,)
    particle_counts[a].update([cna_signature])

Looking at the counts of common neighbor signatures, we see that the first particle of the fcc structure has 12 bonds with signature \((1, 4, 2, 0)\) as we expect.

[3]:
particle_counts[0]
[3]:
Counter({(1, 4, 2, 0): 12,
         (2, 4, 4, 0): 6,
         (2, 2, 1, 0): 24,
         (2, 1, 0, 0): 12})
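
As a small follow-up sketch, the per-particle counters can be aggregated into a structure-level fingerprint using the Counter class imported above; the \((1, 4, 2, 0)\) signature occurring 12 times per particle is the hallmark of fcc.

# Aggregate CNA signatures over all particles
total_counts = Counter()
for counts in particle_counts.values():
    total_counts.update(counts)
print(total_counts.most_common(4))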
Analyzing simulation data from HOOMD-blue at runtime

The following script shows how to use freud to compute the radial distribution function \(g(r)\) on data generated by the molecular dynamics simulation engine HOOMD-blue during a simulation run.

Generally, most users will want to run analyses as post-processing steps on the saved frames of a particle trajectory file. However, it is also possible to use analysis callbacks in HOOMD-blue to compute and log quantities at runtime. Analyzing at runtime makes it possible to stop a simulation early or to change the simulation parameters dynamically according to the analysis results.
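
As a purely illustrative sketch of that idea (the 0.05 threshold is hypothetical, and calc_W6 is the callback defined in the setup cell below), a simulation could be advanced in short chunks and halted once an analysis target is met:

# Illustrative only: run in chunks and stop once the mean W6 value computed
# by the calc_W6 callback (defined below) crosses a hypothetical threshold.
for chunk in range(10):
    hoomd.run(1000)
    if calc_W6(None) > 0.05:
        break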

HOOMD-blue can be installed with conda install -c conda-forge hoomd.

The simulation script runs a Monte Carlo simulation of spheres, with outputs parsed with numpy.genfromtxt.

[1]:
%matplotlib inline
import hoomd
from hoomd import hpmc
import freud
import numpy as np
import matplotlib.pyplot as plt
[2]:
hoomd.context.initialize('')
system = hoomd.init.create_lattice(
    hoomd.lattice.sc(a=1), n=10)
mc = hpmc.integrate.sphere(seed=42, d=0.1, a=0.1)
mc.shape_param.set('A', diameter=0.5)

rdf = freud.density.RDF(rmax=4, dr=0.1)

box = freud.box.Box.from_box(system.box)
w6 = freud.order.LocalWlNear(box, 4, 6, 12)

def calc_rdf(timestep):
    hoomd.util.quiet_status()
    snap = system.take_snapshot()
    hoomd.util.unquiet_status()
    rdf.accumulate(box, snap.particles.position)

def calc_W6(timestep):
    hoomd.util.quiet_status()
    snap = system.take_snapshot()
    hoomd.util.unquiet_status()
    w6.compute(snap.particles.position)
    return np.mean(np.real(w6.Wl))

# Equilibrate the system a bit before accumulating the RDF.
hoomd.run(1e4)
hoomd.analyze.callback(calc_rdf, period=100)

logger = hoomd.analyze.log(filename='output.log',
                           quantities=['w6'],
                           period=100,
                           header_prefix='#',
                           overwrite=True)

logger.register_callback('w6', calc_W6)

hoomd.run(1e4)

# Store the computed RDF in a file
np.savetxt('rdf.csv', np.vstack((rdf.R, rdf.RDF)).T,
           delimiter=',', header='r, rdf')
HOOMD-blue v2.6.0-7-g60513d253 DOUBLE HPMC_MIXED MPI TBB SSE SSE2 SSE3 SSE4_1 SSE4_2 AVX AVX2
Compiled: 06/11/2019
Copyright (c) 2009-2019 The Regents of the University of Michigan.
-----
You are using HOOMD-blue. Please cite the following:
* J A Anderson, C D Lorenz, and A Travesset. "General purpose molecular dynamics
  simulations fully implemented on graphics processing units", Journal of
  Computational Physics 227 (2008) 5342--5359
* J Glaser, T D Nguyen, J A Anderson, P Liu, F Spiga, J A Millan, D C Morse, and
  S C Glotzer. "Strong scaling of general-purpose molecular dynamics simulations
  on GPUs", Computer Physics Communications 192 (2015) 97--107
-----
-----
You are using HPMC. Please cite the following:
* J A Anderson, M E Irrgang, and S C Glotzer. "Scalable Metropolis Monte Carlo
  for simulation of hard shapes", Computer Physics Communications 204 (2016) 21
  --30
-----
HOOMD-blue is running on the CPU
notice(2): Group "all" created containing 1000 particles
** starting run **
Time 00:00:10 | Step 3869 / 10000 | TPS 386.859 | ETA 00:00:15
Time 00:00:20 | Step 7834 / 10000 | TPS 396.436 | ETA 00:00:05
Time 00:00:25 | Step 10000 / 10000 | TPS 386.636 | ETA 00:00:00
Average TPS: 390.536
---------
notice(2): -- HPMC stats:
notice(2): Average translate acceptance: 0.933106
notice(2): Trial moves per second:        1.56207e+06
notice(2): Overlap checks per second:     4.05885e+07
notice(2): Overlap checks per trial move: 25.9838
notice(2): Number of overlap errors:      0
** run complete **
** starting run **
Time 00:00:35 | Step 13151 / 20000 | TPS 315.038 | ETA 00:00:21
Time 00:00:45 | Step 16301 / 20000 | TPS 314.415 | ETA 00:00:11
Time 00:00:55 | Step 19519 / 20000 | TPS 321.741 | ETA 00:00:01
Time 00:00:57 | Step 20000 / 20000 | TPS 329.027 | ETA 00:00:00
Average TPS: 317.613
---------
notice(2): -- HPMC stats:
notice(2): Average translate acceptance: 0.932846
notice(2): Trial moves per second:        1.2704e+06
notice(2): Overlap checks per second:     3.29464e+07
notice(2): Overlap checks per trial move: 25.9338
notice(2): Number of overlap errors:      0
** run complete **
[3]:
rdf_data = np.genfromtxt('rdf.csv', delimiter=',')
plt.plot(rdf_data[:, 0], rdf_data[:, 1])
plt.title('Radial Distribution Function')
plt.xlabel('$r$')
plt.ylabel('$g(r)$')
plt.show()
_images/examples_examples_HOOMD-MC-W6_HOOMD-MC-W6_3_0.png
[4]:
w6_data = np.genfromtxt('output.log')
plt.plot(w6_data[:, 0], w6_data[:, 1])
plt.title('$W_6$ Order Parameter')
plt.xlabel('$t$')
plt.ylabel('$W_6(t)$')
plt.show()
_images/examples_examples_HOOMD-MC-W6_HOOMD-MC-W6_4_0.png
Analyzing data from LAMMPS

The following script shows how to use freud to compute the radial distribution function \(g(r)\) on data generated by the molecular dynamics simulation engine LAMMPS. The input script runs a Lennard-Jones system; the resulting output file is then parsed with numpy.genfromtxt.

The input script is below. Note that we must dump images with ix iy iz, so that the mean squared displacement can be calculated correctly.
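
To see why the image flags matter, here is a minimal sketch of unwrapping a periodic trajectory before computing a simple (non-windowed) mean squared displacement; positions, images, and box_lengths are hypothetical arrays of shapes (n_frames, N, 3), (n_frames, N, 3), and (3,).

import numpy as np

# Unwrap the periodic trajectory with the stored image flags, then compute
# displacements relative to the first frame (simple, non-windowed estimate).
unwrapped = positions + images * box_lengths
msd = np.mean(np.sum((unwrapped - unwrapped[0])**2, axis=-1), axis=-1)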

[1]:
!cat lj.in

Next, we run LAMMPS to generate the output file. LAMMPS can be installed with conda install -c conda-forge lammps.

[2]:
!lmp_serial -in lj.in
LAMMPS (5 Jun 2019)
Created orthogonal box = (0 0 0) to (10 10 10)
  1 by 1 by 1 MPI processor grid
Lattice spacing in x,y,z = 1.25992 1.25992 1.25992
Created 512 atoms
  create_atoms CPU = 0.00142598 secs
Neighbor list info ...
  update every 1 steps, delay 10 steps, check yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 2.8
  ghost atom cutoff = 2.8
  binsize = 1.4, bins = 8 8 8
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair lj/cut, perpetual
      attributes: half, newton on
      pair build: half/bin/newton
      stencil: half/bin/3d/newton
      bin: standard
Setting up Verlet run ...
  Unit style    : lj
  Current step  : 0
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 6.109 | 6.109 | 6.109 Mbytes
Step PotEng KinEng TotEng Temp Press Density
       0   -1804.3284        766.5   -1037.8284            1   -2.1872025        0.512
     100   -1834.8127    774.55302   -1060.2596    1.0105062  -0.32671112        0.512
     200   -1852.2773    789.53605   -1062.7413    1.0300536  -0.30953463        0.512
     300   -1857.4621    795.78772   -1061.6744    1.0382097  -0.22960441        0.512
     400    -1864.766    801.81089   -1062.9551    1.0460677  -0.24901206        0.512
     500   -1860.0198    796.65657   -1063.3633    1.0393432  -0.14280039        0.512
     600   -1859.1835    796.96259    -1062.221    1.0397425   -0.2828161        0.512
     700   -1848.9874    786.01864   -1062.9688    1.0254646  -0.34512435        0.512
     800   -1821.7263    759.86418   -1061.8622    0.9913427   -0.1766353        0.512
     900   -1840.7256    777.68022   -1063.0453    1.0145861    -0.318844        0.512
    1000   -1862.6606    799.32963   -1063.3309    1.0428306  -0.25224674        0.512
Loop time of 0.201545 on 1 procs for 1000 steps with 512 atoms

Performance: 2143441.910 tau/day, 4961.671 timesteps/s
99.2% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.13586    | 0.13586    | 0.13586    |   0.0 | 67.41
Bond    | 8.75e-05   | 8.75e-05   | 8.75e-05   |   0.0 |  0.04
Neigh   | 0.05059    | 0.05059    | 0.05059    |   0.0 | 25.10
Comm    | 0.0088689  | 0.0088689  | 0.0088689  |   0.0 |  4.40
Output  | 0.0002439  | 0.0002439  | 0.0002439  |   0.0 |  0.12
Modify  | 0.0044417  | 0.0044417  | 0.0044417  |   0.0 |  2.20
Other   |            | 0.001454   |            |       |  0.72

Nlocal:    512 ave 512 max 512 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost:    1447 ave 1447 max 1447 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs:    12018 ave 12018 max 12018 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 12018
Ave neighs/atom = 23.4727
Ave special neighs/atom = 0
Neighbor list builds = 100
Dangerous builds = 100
Setting up Verlet run ...
  Unit style    : lj
  Current step  : 1000
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 6.109 | 6.109 | 6.109 Mbytes
Step PotEng KinEng TotEng Temp Press Density
    1000   -1862.6606    799.32963   -1063.3309    1.0428306  -0.25224674        0.512
    1100   -1853.4242    819.28434   -1034.1399    1.0688641  -0.16446166        0.512
    1200   -1840.5875    793.33971   -1047.2477    1.0350159  -0.21578932        0.512
    1300   -1838.9016     796.0771   -1042.8245    1.0385872  -0.19354995        0.512
    1400   -1848.5392     752.5312    -1096.008   0.98177587  -0.22928676        0.512
    1500   -1856.8763    746.44097   -1110.4353   0.97383035  -0.18936813        0.512
    1600   -1869.5931    732.08398   -1137.5091   0.95509978   -0.2751998        0.512
    1700   -1887.7451    761.66169   -1126.0834   0.99368779  -0.35301947        0.512
    1800   -1882.9325    729.51153    -1153.421   0.95174368  -0.33872437        0.512
    1900   -1867.9452    763.40829   -1104.5369   0.99596646  -0.30614623        0.512
    2000   -1874.4475     752.8181   -1121.6294   0.98215017  -0.30908533        0.512
Loop time of 0.206815 on 1 procs for 1000 steps with 512 atoms

Performance: 2088823.301 tau/day, 4835.239 timesteps/s
98.9% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.13859    | 0.13859    | 0.13859    |   0.0 | 67.01
Bond    | 9.3222e-05 | 9.3222e-05 | 9.3222e-05 |   0.0 |  0.05
Neigh   | 0.05101    | 0.05101    | 0.05101    |   0.0 | 24.66
Comm    | 0.0086555  | 0.0086555  | 0.0086555  |   0.0 |  4.19
Output  | 0.00020504 | 0.00020504 | 0.00020504 |   0.0 |  0.10
Modify  | 0.0067122  | 0.0067122  | 0.0067122  |   0.0 |  3.25
Other   |            | 0.001549   |            |       |  0.75

Nlocal:    512 ave 512 max 512 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost:    1464 ave 1464 max 1464 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs:    11895 ave 11895 max 11895 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 11895
Ave neighs/atom = 23.2324
Ave special neighs/atom = 0
Neighbor list builds = 100
Dangerous builds = 100
Setting up Verlet run ...
  Unit style    : lj
  Current step  : 2000
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 7.383 | 7.383 | 7.383 Mbytes
Step PotEng KinEng TotEng Temp Press Density
    2000   -1874.4475     752.8181   -1121.6294   0.98215017  -0.30908533        0.512
    2100   -1858.3201     763.1433   -1095.1768   0.99562074  -0.25351893        0.512
    2200   -1866.9213    770.43352   -1096.4878    1.0051318  -0.27646217        0.512
    2300   -1879.7957    721.28174    -1158.514   0.94100683  -0.31881659        0.512
    2400   -1886.0524    740.29981   -1145.7526   0.96581841  -0.36988824        0.512
    2500   -1862.4955    731.77932   -1130.7162   0.95470231  -0.23656666        0.512
    2600    -1847.542    748.14185   -1099.4002   0.97604938  -0.22297358        0.512
    2700   -1863.1603    715.01181   -1148.1485   0.93282689  -0.27535839        0.512
    2800   -1858.9263    711.64082   -1147.2855   0.92842899  -0.31272288        0.512
    2900   -1862.0527     788.4678   -1073.5849    1.0286599  -0.20135611        0.512
    3000   -1848.1516    797.66227   -1050.4894    1.0406553  -0.27353978        0.512
    3100   -1883.8621    793.05475   -1090.8073    1.0346442  -0.29972206        0.512
    3200   -1890.4065    791.32467   -1099.0819     1.032387  -0.35642545        0.512
    3300   -1859.2997    745.34089   -1113.9588   0.97239516  -0.26722308        0.512
    3400   -1869.8929    762.57135   -1107.3216   0.99487457  -0.14226646        0.512
    3500   -1879.6557    732.72846   -1146.9273   0.95594058  -0.21775981        0.512
    3600   -1899.0227    766.18046   -1132.8422   0.99958312   -0.2798366        0.512
    3700   -1872.6895    817.06218   -1055.6273     1.065965  -0.23193326        0.512
    3800   -1891.1356    802.56843   -1088.5672     1.047056  -0.23387156        0.512
    3900    -1840.088    753.28729   -1086.8007   0.98276228  -0.21465531        0.512
    4000   -1882.7617    803.22857   -1079.5332    1.0479172  -0.31896543        0.512
    4100   -1873.9061    787.05281   -1086.8533    1.0268138  -0.26608644        0.512
    4200   -1871.6627    832.59728   -1039.0655    1.0862326  -0.29040189        0.512
    4300   -1865.3725    819.61212   -1045.7603    1.0692917  -0.22592305        0.512
    4400   -1875.5306    806.71297   -1068.8176    1.0524631  -0.31604788        0.512
    4500    -1857.109    828.16158   -1028.9474    1.0804456   -0.2464398        0.512
    4600   -1857.8912     729.7257   -1128.1655    0.9520231  -0.31385004        0.512
    4700    -1842.205    734.17836   -1108.0267   0.95783217  -0.27130372        0.512
    4800   -1864.7696    776.14641   -1088.6232     1.012585  -0.31668109        0.512
    4900   -1858.1103    793.41913   -1064.6911    1.0351195  -0.16583366        0.512
    5000   -1867.7818    815.23276   -1052.5491    1.0635783  -0.28680645        0.512
    5100   -1838.0477      725.412   -1112.6357    0.9463953  -0.28647867        0.512
    5200   -1810.7731     731.9772   -1078.7959   0.95496047  -0.16033508        0.512
    5300   -1837.5311    749.48424   -1088.0469   0.97780071  -0.20281441        0.512
    5400   -1873.1094    764.60064   -1108.5088   0.99752204  -0.41358648        0.512
    5500   -1888.9361    748.61774   -1140.3184   0.97667025  -0.36938658        0.512
    5600   -1869.9513    762.05258   -1107.8988   0.99419776   -0.4223791        0.512
    5700    -1858.339    746.55871   -1111.7803   0.97398396  -0.42269281        0.512
    5800   -1863.2613    749.34951   -1113.9118   0.97762493  -0.38710722        0.512
    5900   -1873.7293    773.93107   -1099.7982    1.0096948  -0.26021895        0.512
    6000    -1873.456    787.00426   -1086.4518    1.0267505  -0.22677264        0.512
    6100   -1856.3965    789.71834   -1066.6782    1.0302914  -0.23662444        0.512
    6200   -1868.1487    781.09973   -1087.0489    1.0190473  -0.13471937        0.512
    6300   -1873.9941    740.70637   -1133.2877   0.96634882  -0.26089329        0.512
    6400   -1879.5293    758.83006   -1120.6993   0.98999355  -0.40717493        0.512
    6500    -1873.208    730.21233   -1142.9956   0.95265797  -0.33679524        0.512
    6600    -1893.088    738.17171   -1154.9163   0.96304202  -0.34898503        0.512
    6700   -1854.9994    735.97428   -1119.0252   0.96017518  -0.28228204        0.512
    6800   -1841.9759    797.06384   -1044.9121    1.0398745  -0.19145452        0.512
    6900   -1850.4935    786.14747   -1064.3461    1.0256327  -0.29327665        0.512
    7000   -1845.6749    797.15417   -1048.5207    1.0399924  -0.45867335        0.512
    7100     -1831.03    827.34343   -1003.6866    1.0793782    -0.179498        0.512
    7200   -1888.1042    749.22706   -1138.8771   0.97746518  -0.53010406        0.512
    7300   -1859.9233     754.0352   -1105.8881   0.98373803  -0.39545192        0.512
    7400   -1851.9183    787.60897   -1064.3093    1.0275394  -0.37094061        0.512
    7500   -1848.0739    759.73299   -1088.3409   0.99117155  -0.34780329        0.512
    7600   -1853.6532    764.84642   -1088.8067   0.99784269 -0.098590718        0.512
    7700   -1876.6886    756.38707   -1120.3016   0.98680636  -0.17912577        0.512
    7800   -1857.6403    719.20424   -1138.4361   0.93829647  -0.32247855        0.512
    7900   -1891.2369    707.44358   -1183.7933   0.92295314  -0.44928961        0.512
    8000   -1930.5545    747.85472   -1182.6997   0.97567478   -0.2607688        0.512
    8100   -1931.3403    744.07929    -1187.261   0.97074924  -0.36763161        0.512
    8200   -1920.9036     757.0399   -1163.8637   0.98765806  -0.29103201        0.512
    8300   -1904.5561    747.57535   -1156.9807    0.9753103  -0.38464012        0.512
    8400   -1844.7405    820.31281   -1024.4277    1.0702059 -0.044405706        0.512
    8500   -1860.3078    809.13555   -1051.1723    1.0556237 -0.018849627        0.512
    8600   -1841.1531    776.85955   -1064.2935    1.0135154 -0.080192818        0.512
    8700   -1860.6583      785.807   -1074.8513    1.0251885  -0.29734141        0.512
    8800   -1841.0455    779.78036   -1061.2651     1.017326  -0.11420405        0.512
    8900   -1887.3837    878.92659   -1008.4571    1.1466753  -0.34666733        0.512
    9000   -1879.4834    767.25891   -1112.2245    1.0009901   -0.3331713        0.512
    9100   -1900.1999    818.54475   -1081.6552    1.0678992  -0.19458572        0.512
    9200   -1882.1203    794.90843   -1087.2118    1.0370625  -0.25879106        0.512
    9300   -1893.5664    783.13068   -1110.4357    1.0216969  -0.25735285        0.512
    9400   -1893.5147    756.00962   -1137.5051   0.98631392  -0.26461519        0.512
    9500   -1908.8115    742.60538   -1166.2061   0.96882633   -0.4468834        0.512
    9600   -1887.0565    762.24949    -1124.807   0.99445465  -0.36695082        0.512
    9700   -1878.5858    771.53563   -1107.0502    1.0065696   -0.2300855        0.512
    9800   -1848.4047    752.27373   -1096.1309   0.98143997  -0.28729274        0.512
    9900    -1865.561    731.41466   -1134.1464   0.95422656   -0.3874617        0.512
   10000   -1887.2808    787.80237   -1099.4784    1.0277917  -0.26779032        0.512
Loop time of 1.67987 on 1 procs for 8000 steps with 512 atoms

Performance: 2057303.356 tau/day, 4762.276 timesteps/s
98.6% CPU use with 1 MPI tasks x no OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 1.1024     | 1.1024     | 1.1024     |   0.0 | 65.62
Bond    | 0.00062656 | 0.00062656 | 0.00062656 |   0.0 |  0.04
Neigh   | 0.40537    | 0.40537    | 0.40537    |   0.0 | 24.13
Comm    | 0.067369   | 0.067369   | 0.067369   |   0.0 |  4.01
Output  | 0.040565   | 0.040565   | 0.040565   |   0.0 |  2.41
Modify  | 0.051896   | 0.051896   | 0.051896   |   0.0 |  3.09
Other   |            | 0.01168    |            |       |  0.70

Nlocal:    512 ave 512 max 512 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost:    1398 ave 1398 max 1398 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs:    12036 ave 12036 max 12036 min
Histogram: 1 0 0 0 0 0 0 0 0 0

Total # of neighbors = 12036
Ave neighs/atom = 23.5078
Ave special neighs/atom = 0
Neighbor list builds = 800
Dangerous builds = 800
Total wall time: 0:00:02
[3]:
%matplotlib inline

import freud
from matplotlib import pyplot as plt
import numpy as np
import warnings
[4]:
with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    # We read the number of particles, the system box, and the
    # particle positions into 3 separate arrays.
    N = int(np.genfromtxt(
        'output_custom.xyz', skip_header=3, max_rows=1))
    box_data = np.genfromtxt(
        'output_custom.xyz', skip_header=5, max_rows=3)
    data = np.genfromtxt(
        'output_custom.xyz', skip_header=9,
        invalid_raise=False)

# Remove the unwanted text rows
data = data[~np.isnan(data).all(axis=1)].reshape(-1, N, 6)

box = freud.box.Box.from_box(
    box_data[:, 1] - box_data[:, 0])

# We shift the system by half the box lengths to match the
# freud coordinate system, which is centered at the origin.
# Since all methods support periodicity, this shift is simply
# for consistency but does not affect any analyses.
data[..., :3] -= box.L/2
rdf = freud.density.RDF(rmax=4, dr=0.03, rmin=1)
for frame in data:
    rdf.accumulate(box, frame[:, :3])

msd = freud.msd.MSD(box)
msd.compute(positions=data[:, :, :3], images=data[:, :, 3:])

# Plot the RDF
plt.plot(rdf.R, rdf.RDF)
plt.title('Radial Distribution Function')
plt.xlabel('$r$')
plt.ylabel('$g(r)$')
plt.show()

# Plot the MSD
plt.plot(msd.msd)
plt.title('Mean Squared Displacement')
plt.xlabel('$t$')
plt.ylabel('MSD$(t)$')
plt.show()
_images/examples_examples_LAMMPS-LJ-MSD_LAMMPS-LJ-MSD_5_0.png
_images/examples_examples_LAMMPS-LJ-MSD_LAMMPS-LJ-MSD_5_1.png
Using Machine Learning for Structural Identification

This notebook provides a demonstration of how a simple set of descriptors computed by freud can be coupled with machine learning for structural identification. The set of descriptors used here are not enough to identify complex crystal structures, but this notebook provides an introduction. For a more powerful set of descriptors, see the paper Machine learning for crystal identification and discovery (Spellings 2018) and the library pythia, both of which use freud for their computations.

[1]:
import freud
import matplotlib.pyplot as plt
import matplotlib.cm
import numpy as np
import pandas as pd
import util

We generate sample body-centered cubic, face-centered cubic, and simple cubic structures. Each structure has at least 4000 particles.

[2]:
N = 4000
noise = 0.1
structures = {}
n = round((N/2)**(1/3))
structures['bcc'] = util.make_bcc(nx=n, ny=n, nz=n, noise=noise)
n = round((N/4)**(1/3))
structures['fcc'] = util.make_fcc(nx=n, ny=n, nz=n, noise=noise)
n = round((N/1)**(1/3))
structures['sc'] = util.make_sc(nx=n, ny=n, nz=n, noise=noise)
for name, (box, positions) in structures.items():
    print(name, 'has', len(positions), 'particles.')
bcc has 4394 particles.
fcc has 4000 particles.
sc has 4096 particles.

Next, we compute the Steinhardt order parameters \(Q_l\) for \(l \in \{4, 6, 8, 10, 12\}\).

We use the Voronoi neighbor list, removing neighbors whose Voronoi facets are small.

[3]:
def get_features(box, positions, structure):
    voro = freud.voronoi.Voronoi(box, buff=max(box.L)/2)
    voro.computeNeighbors(positions)
    nlist = voro.nlist
    nlist.filter(nlist.weights > 0.1)
    features = {}
    for l in [4, 6, 8, 10, 12]:
        ql = freud.order.LocalQl(box, rmax=max(box.L)/2, l=l)
        ql.compute(positions, nlist)
        Ql = ql.Ql.copy()
        features['q{}'.format(l)] = Ql

    return features
[4]:
structure_features = {}
for name, (box, positions) in structures.items():
    structure_features[name] = get_features(box, positions, name)

Here, we plot a histogram of the \(Q_4\) and \(Q_6\) values for each structure.

[5]:
for l in [4, 6]:
    plt.figure(figsize=(3, 2), dpi=300)
    for name in structures.keys():
        plt.hist(structure_features[name]['q{}'.format(l)], range=(0, 1), bins=100, label=name, alpha=0.7)
    plt.title(r'$Q_{{{l}}}$'.format(l=l))
    plt.legend()
    for lh in plt.legend().legendHandles:
        lh.set_alpha(1)
    plt.show()
_images/examples_examples_Using_Machine_Learning_for_Structural_Identification_8_0.png
_images/examples_examples_Using_Machine_Learning_for_Structural_Identification_8_1.png

Next, we will train a Support Vector Machine to predict particles’ structures based on these Steinhardt \(Q_l\) descriptors. We build pandas data frames to hold the structure features, encoding the structure as an integer. We use train_test_split to train on part of the data and test the model on a separate part of the data.

[6]:
structure_dfs = {}
for i, structure in enumerate(structure_features):
    df = pd.DataFrame.from_dict(structure_features[structure])
    df['class'] = i
    structure_dfs[structure] = df
[7]:
df = pd.concat(structure_dfs.values()).reset_index(drop=True)
[8]:
from sklearn.preprocessing import normalize
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
[9]:
X = df.drop('class', axis=1).values
X = normalize(X)
y = df['class'].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)

svm = SVC()
svm.fit(X_train, y_train)
print('Score:', svm.score(X_test, y_test))
Score: 0.9830179524502669

Finally, we use the Uniform Manifold Approximation and Projection method (McInnes 2018, GitHub repo) to project the high-dimensional descriptors into a two-dimensional plot. Notice that some bcc particles overlap with fcc particles. This can be expected from the noise that was added to the structures. The particles that were incorrectly classified by the SVM above are probably located in this overlapping region.

[10]:
from umap import UMAP
umap = UMAP(random_state=42)

X_reduced = umap.fit_transform(X)
[11]:
plt.figure(figsize=(4, 3), dpi=300)
for i in range(max(y) + 1):
    indices = np.where(y == i)[0]
    plt.scatter(X_reduced[indices, 0], X_reduced[indices, 1],
                color=matplotlib.cm.tab10(i), s=8, alpha=0.2,
                label=list(structure_features.keys())[i])
plt.legend()
for lh in plt.legend().legendHandles:
    lh.set_alpha(1)
plt.show()
_images/examples_examples_Using_Machine_Learning_for_Structural_Identification_16_0.png
Calculating Strain via Voxelization

This notebook shows how to use freud’s nearest neighbor module to create a voxelized version of a system.

In brief, we are going to create a set of points that define the centers of our voxels and assign every particle to one of these voxels. Then we average some property of the particles over all of the particles in each bin.

At the end we want to have a sampling of some particle property in our system on a regular grid (as a NumPy array).
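
As a minimal non-freud sketch of the same idea for a single frame (hypothetical inputs: positions of shape (N, 3) centered in a box with side lengths box_lengths, a per-particle property values of shape (N,), and a bin resolution tuple res):

import numpy as np

# Assign each particle to a voxel, then average a per-particle property per voxel.
idx = np.floor((positions + box_lengths / 2) / box_lengths * res).astype(int) % res
flat = np.ravel_multi_index(idx.T, res)
counts = np.bincount(flat, minlength=np.prod(res))
sums = np.bincount(flat, weights=values, minlength=np.prod(res))
voxel_average = (sums / np.maximum(counts, 1)).reshape(res)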

[1]:
import freud
import numpy as np
import matplotlib.pyplot as plt
import re
from scipy.sparse import csr_matrix, csc_matrix
%matplotlib inline
from ipywidgets import FloatProgress
from IPython.display import display

This example uses data from text files that were output by the visualization software OVITO (https://ovito.org/).

The files have a header with box information, and then a list of particle info. These files have 10 fields per particle:

(ID#, position(x,y,z), strains(xx,yy,zz,yz,xz,xy))

The goal is to turn this into an \((N_x, N_y, N_z, 3, 3)\) NumPy array, where \(N_x, N_y, N_z\) are the number of bins in each dimension, and each of those bins has an averaged 3x3 strain array.

First, we read in the box info from our text files and construct an average box. We need this so that we can place our bin centers.

[2]:
framefiles = ['data/strain_data/frame{f}'.format(f=f) for f in [100, 110, 120, 130]]

# read all the boxes, so we can make the grid points for voxelizing
boxes = []
for f in framefiles:
    ff = open(f, 'r')
    _ = ff.readline()
    header = ff.readline()

    match = re.match('^Lattice=".*"', header)
    boxstring = match.group(0)
    boxes.append(np.array(str.split(boxstring[9:-1]), dtype=np.float).reshape((3,3)).T)
    ff.close()

# find the average box
ave_box = np.array(boxes).mean(axis=0)

Now we make the bin centers using np.meshgrid, then combine the X, Y, and Z coordinates into an array of shape \((N_x N_y N_z, 3)\) to pass to freud.

[3]:
res = (60, 10, 45) # The number of bins (in x,y,z)
xx = np.linspace(-ave_box[0,0]/2,ave_box[0,0]/2,num=res[0])
yy = np.linspace(-ave_box[1,1]/2,ave_box[1,1]/2,num=res[1])
zz = np.linspace(-ave_box[2,2]/2,ave_box[2,2]/2,num=res[2])
XX, YY, ZZ = np.meshgrid(xx,yy,zz)

XYZ = np.append(np.append(XX.flatten().reshape((-1,1)),
                          YY.flatten().reshape((-1,1)), axis=1),
                ZZ.flatten().reshape((-1,1)), axis=1).astype(np.float32)

Now we iterate over our files and compute the first nearest neighbor (among the bin centers) of the particles, so we know which bin to average them in.

It is important to use scipy’s csr_matrix for this process when the number of particles is large. These files contain >80,000 particles, and without the sparse matrix, the dot product to determine grid totals would be extremely slow.
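
As a toy illustration of this sparse-matrix binning trick (not part of the original notebook; the array values below are made up), a csr_matrix whose rows are voxels and whose columns are particles turns both the per-voxel counts and the per-voxel sums into single dot products:

import numpy as np
from scipy.sparse import csr_matrix

voxel_of_particle = np.array([0, 1, 0])   # hypothetical voxel assignment per particle
values = np.array([2.0, 5.0, 4.0])        # hypothetical per-particle property
ones = np.ones(len(values))

# One entry per particle: row = voxel index, column = particle index
sprse = csr_matrix((ones, (voxel_of_particle, np.arange(len(values)))),
                   shape=(2, len(values)))

counts = sprse.dot(ones)               # particles per voxel: [2., 1.]
averages = sprse.dot(values) / counts  # per-voxel averages:  [3., 5.]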

[4]:
master_strains = np.zeros((XYZ.shape[0], 6)) # matrix to sum into

for i in range(len(framefiles)):
    data = np.loadtxt(framefiles[i], skiprows=2).astype(np.float32)

    box = freud.box.Box(Lx=boxes[i][0,0],
                        Ly=boxes[i][1,1],
                        Lz=boxes[i][2,2],
                        yz=boxes[i][1,2],
                        xz=boxes[i][0,2],
                        xy=boxes[i][0,1])

    nn = freud.locality.NearestNeighbors(rmax=np.amax([ave_box[0,0]/res[0],ave_box[1,1]/res[1],ave_box[2,2]/res[2]]),
                                        n_neigh=1)

    nn.compute(box=box, ref_points=data[:,1:4], points=XYZ)
    n_list = nn.getNeighborList()

    sprse = csr_matrix((np.ones(n_list.shape[0]),
                        (n_list.flatten(), np.arange(n_list.shape[0]))),
                       shape=(XYZ.shape[0], n_list.shape[0]))

    # strain data
    sdata = data[:,4:]
    binned = np.zeros((XYZ.shape[0],6))
    # number of particles in each bin
    grid_totals = sprse.dot(np.ones(n_list.shape[0]))
    grid_totals[grid_totals==0] = 1 # get rid of division errors

    for j in range(6):
        binned[:,j] = sprse.dot(sdata[:,j])/grid_totals

    master_strains = master_strains + binned

master_strains = master_strains/len(framefiles) # divide by number of frames

Now we pack up the resulting array into the shape we want it to be: \((N_x, N_y, N_z, 3, 3)\).

[5]:
final_matrix = np.zeros((res[1],res[0],res[2],3,3))

# this mapping turns 6 strain values into a symmetric (3,3) matrix
voigt_map = {0:(0,0), 1:(1,1), 2:(2,2), 3:(1,2), 4:(0,2), 5:(0,1)}

for i in range(6):
    v = voigt_map[i]
    final_matrix[:,:,:,v[0],v[1]] = master_strains[:,i].reshape(res[1],res[0],res[2])
    if v[0]!=v[1]:
        final_matrix[:,:,:,v[1],v[0]] = master_strains[:,i].reshape(res[1],res[0],res[2])

Since we are only using four frames, the distribution is not very well sampled. Still, we can see a distinct strain distribution emerging if we average along the first axis of the matrix (this particular system should not vary in that direction).

[6]:
plt.figure(figsize=(10,10))
plt.imshow(final_matrix[:,:,:,0,0].mean(axis=0),
           origin='lower', cmap=plt.cm.bwr,
           vmin=-0.04, vmax=0.04, interpolation='none')
plt.colorbar()
plt.show()
_images/examples_examples_Calculating_Strain_via_Voxelization_12_0.png
Visualizing analyses with fresnel

In this notebook, we simulate a system of tetrahedra, color particles according to their local density, and path-trace the resulting image with fresnel.

The cell below runs a short HOOMD-blue simulation of tetrahedra using Hard Particle Monte Carlo (HPMC).

[1]:
import hoomd
import hoomd.hpmc
hoomd.context.initialize('')

# Create an 8x8x8 simple cubic lattice
system = hoomd.init.create_lattice(
    unitcell=hoomd.lattice.sc(a=1.5), n=8)

# Create our tetrahedra and configure the HPMC integrator
mc = hoomd.hpmc.integrate.convex_polyhedron(seed=42)
mc.set_params(d=0.2, a=0.1)
vertices = [( 0.5, 0.5, 0.5),
            (-0.5,-0.5, 0.5),
            (-0.5, 0.5,-0.5),
            ( 0.5,-0.5,-0.5)]
mc.shape_param.set('A', vertices=vertices)

# Run for 5,000 steps
hoomd.run(5e3)
snap = system.take_snapshot()
HOOMD-blue v2.6.0-7-g60513d253 DOUBLE HPMC_MIXED MPI TBB SSE SSE2 SSE3 SSE4_1 SSE4_2 AVX AVX2
Compiled: 06/13/2019
Copyright (c) 2009-2019 The Regents of the University of Michigan.
-----
You are using HOOMD-blue. Please cite the following:
* J A Anderson, C D Lorenz, and A Travesset. "General purpose molecular dynamics
  simulations fully implemented on graphics processing units", Journal of
  Computational Physics 227 (2008) 5342--5359
* J Glaser, T D Nguyen, J A Anderson, P Liu, F Spiga, J A Millan, D C Morse, and
  S C Glotzer. "Strong scaling of general-purpose molecular dynamics simulations
  on GPUs", Computer Physics Communications 192 (2015) 97--107
-----
-----
You are using HPMC. Please cite the following:
* J A Anderson, M E Irrgang, and S C Glotzer. "Scalable Metropolis Monte Carlo
  for simulation of hard shapes", Computer Physics Communications 204 (2016) 21
  --30
-----
HOOMD-blue is running on the CPU
notice(2): Group "all" created containing 512 particles
** starting run **
Time 00:00:10 | Step 2053 / 5000 | TPS 205.218 | ETA 00:00:14
Time 00:00:20 | Step 4159 / 5000 | TPS 210.543 | ETA 00:00:03
Time 00:00:24 | Step 5000 / 5000 | TPS 210.467 | ETA 00:00:00
Average TPS: 208.306
---------
notice(2): -- HPMC stats:
notice(2): Average translate acceptance: 0.749166
notice(2): Average rotate acceptance:    0.867601
notice(2): Trial moves per second:        426589
notice(2): Overlap checks per second:     3.06914e+07
notice(2): Overlap checks per trial move: 71.946
notice(2): Number of overlap errors:      0
** run complete **

Now we import the modules needed for analysis and visualization.

[2]:
import fresnel
import freud
import matplotlib.cm
from matplotlib.colors import Normalize
import numpy as np
device = fresnel.Device()

Next, we’ll set up the arrays needed for the scene and its geometry. This includes the analysis used for coloring particles.

[3]:
poly_info = fresnel.util.convex_polyhedron_from_vertices(vertices)
positions = snap.particles.position
orientations = snap.particles.orientation
box = freud.box.Box.from_box(snap.box)
ld = freud.density.LocalDensity(3.0, 1.0, 1.0)
ld.compute(box, positions)
colors = matplotlib.cm.viridis(Normalize()(ld.density))
box_points = np.asarray([
    box.makeCoordinates(
        [[0, 0, 0], [0, 0, 0], [0, 0, 0], [1, 1, 0],
         [1, 1, 0], [1, 1, 0], [0, 1, 1], [0, 1, 1],
         [0, 1, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1]]),
    box.makeCoordinates(
        [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0],
         [0, 1, 0], [1, 1, 1], [1, 1, 1], [0, 1, 0],
         [0, 0, 1], [0, 0, 1], [1, 1, 1], [1, 0, 0]])])

This cell creates the scene and geometry objects to be rendered by fresnel.

[4]:
scene = fresnel.Scene(device)
geometry = fresnel.geometry.ConvexPolyhedron(
    scene, poly_info,
    position=positions,
    orientation=orientations,
    color=fresnel.color.linear(colors))
geometry.material = fresnel.material.Material(
    color=fresnel.color.linear([0.25, 0.5, 0.9]),
    roughness=0.8, primitive_color_mix=1.0)
geometry.outline_width = 0.05
box_geometry = fresnel.geometry.Cylinder(
    scene, points=box_points.swapaxes(0, 1))
box_geometry.radius[:] = 0.1
box_geometry.color[:] = np.tile([0, 0, 0], (12, 2, 1))
box_geometry.material.primitive_color_mix = 1.0
scene.camera = fresnel.camera.fit(scene, view='isometric', margin=0.1)

First, we preview the scene. (This doesn’t use path tracing, and is much faster.)

[5]:
fresnel.preview(scene, aa_level=3, w=600, h=600)
[5]:
_images/examples_examples_Visualization_with_fresnel_10_0.png

Finally, we use path tracing for a high quality image. The number of light samples can be increased to reduce path tracing noise.

[6]:
fresnel.pathtrace(scene, light_samples=16, w=600, h=600)
[6]:
_images/examples_examples_Visualization_with_fresnel_12_0.png
Visualization with plato

In this notebook, we run a Lennard-Jones simulation, color particles according to their local density computed with freud, and display the results with plato. Note that plato has multiple backends – see the plato documentation for information about each backend and the features it supports.

[1]:
import hoomd
import hoomd.md
hoomd.context.initialize('')

# Silence the HOOMD output
hoomd.util.quiet_status()
hoomd.option.set_notice_level(0)

# Create a 10x10x10 simple cubic lattice of particles with type name A
system = hoomd.init.create_lattice(unitcell=hoomd.lattice.sc(a=1.5, type_name='A'), n=10)

# Specify Lennard-Jones interactions between particle pairs
nl = hoomd.md.nlist.cell()
lj = hoomd.md.pair.lj(r_cut=3.0, nlist=nl)
lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)

# Integrate at constant temperature
hoomd.md.integrate.mode_standard(dt=0.005)
integrator = hoomd.md.integrate.nvt(group=hoomd.group.all(), kT=0.01, tau=0.5)
integrator.randomize_velocities(seed=42)

# Run for 10,000 time steps
hoomd.run(10e3)
snap = system.take_snapshot()
HOOMD-blue v2.6.0-7-g60513d253 DOUBLE HPMC_MIXED MPI TBB SSE SSE2 SSE3 SSE4_1 SSE4_2 AVX AVX2
Compiled: 06/13/2019
Copyright (c) 2009-2019 The Regents of the University of Michigan.
-----
You are using HOOMD-blue. Please cite the following:
* J A Anderson, C D Lorenz, and A Travesset. "General purpose molecular dynamics
  simulations fully implemented on graphics processing units", Journal of
  Computational Physics 227 (2008) 5342--5359
* J Glaser, T D Nguyen, J A Anderson, P Liu, F Spiga, J A Millan, D C Morse, and
  S C Glotzer. "Strong scaling of general-purpose molecular dynamics simulations
  on GPUs", Computer Physics Communications 192 (2015) 97--107
-----
HOOMD-blue is running on the CPU

Now we import the modules needed for visualization.

[2]:
import freud
import matplotlib.cm
from matplotlib.colors import Normalize
import numpy as np
import plato
# For interactive scenes, use:
import plato.draw.pythreejs as draw
# For static scenes, use:
#import plato.draw.fresnel as draw

This code sets up the plato Scene object with the particles and colors computed above.

[3]:
positions = snap.particles.position
box = freud.box.Box.from_box(snap.box)
ld = freud.density.LocalDensity(3.0, 1.0, 1.0)
ld.compute(box, positions)
colors = matplotlib.cm.viridis(Normalize()(ld.density))
radii = np.ones(len(positions)) * 0.5
box_prim = draw.Lines(
    start_points=box.makeCoordinates(
        [[0, 0, 0], [0, 0, 0], [0, 0, 0], [1, 1, 0],
         [1, 1, 0], [1, 1, 0], [0, 1, 1], [0, 1, 1],
         [0, 1, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1]]),
    end_points=box.makeCoordinates(
        [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0],
         [0, 1, 0], [1, 1, 1], [1, 1, 1], [0, 1, 0],
         [0, 0, 1], [0, 0, 1], [1, 1, 1], [1, 0, 0]]),
    widths=0.2,
    colors=[[0, 0, 0, 1]]*12,
)
sphere_prim = draw.Spheres(
    positions=snap.particles.position,
    radii=radii,
    colors=colors,
    vertex_count=32)
scene = draw.Scene((sphere_prim, box_prim), zoom=1.5)

Click and drag the 3D scene below - it’s interactive!

[4]:
scene.show()
Visualizing 3D Voronoi and Voxelization

The plato-draw package allows for visualizing particle data in 2D and 3D using a variety of backend libraries. Here, we show a 3D Voronoi diagram drawn using fresnel and pythreejs. We use rowan to generate the view rotation.

To install dependencies:

  • conda install -c conda-forge fresnel

  • pip install plato-draw rowan

[1]:
import freud
import matplotlib.cm
import numpy as np
import rowan
from util import make_fcc
import plato.draw.fresnel
backend = plato.draw.fresnel
# For interactive scenes:
# import plato.draw.pythreejs
# backend = plato.draw.pythreejs
[2]:
def plot_crystal(box, positions, colors=None, radii=None, backend=None,
                 polytopes=[], polytope_colors=None):
    if backend is None:
        backend = plato.draw.fresnel
    if colors is None:
        colors = np.array([[0.5, 0.5, 0.5, 1]] * len(positions))
    if radii is None:
        radii = np.array([0.5] * len(positions))
    sphere_prim = backend.Spheres(positions=positions, colors=colors, radii=radii)
    box_prim = backend.Lines(
        start_points=box.makeCoordinates(
            [[0, 0, 0], [0, 0, 0], [0, 0, 0], [1, 1, 0],
             [1, 1, 0], [1, 1, 0], [0, 1, 1], [0, 1, 1],
             [0, 1, 1], [1, 0, 1], [1, 0, 1], [1, 0, 1]]),
        end_points=box.makeCoordinates(
            [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0],
             [0, 1, 0], [1, 1, 1], [1, 1, 1], [0, 1, 0],
             [0, 0, 1], [0, 0, 1], [1, 1, 1], [1, 0, 0]]),
        widths=0.1,
        colors=[0, 0, 0, 1])
    if polytope_colors is None:
        polytope_colors = colors * np.array([1, 1, 1, 0.4])
    polytope_prims = []
    for p, c in zip(polytopes, polytope_colors):
        p_prim = backend.ConvexPolyhedra(
            positions=[[0, 0, 0]], colors=c, vertices=p, outline=0)
        polytope_prims.append(p_prim)
    rotation = rowan.multiply(
        rowan.from_axis_angle([1, 0, 0], np.pi/10),
        rowan.from_axis_angle([0, 1, 0], -np.pi/10))
    scene = backend.Scene([sphere_prim, box_prim, *polytope_prims],
                          zoom=3, rotation=rotation)
    if backend is not plato.draw.fresnel:
        scene.enable('directional_light')
    #else:
    #    scene.enable('antialiasing')
    scene.show()

We generate an fcc structure and add Gaussian noise to the positions. Colors are assigned randomly.

[3]:
np.random.seed(12)
box, positions = make_fcc(nx=2, ny=2, nz=2, scale=1.5, noise=0.1)
positions = box.wrap(positions)
cmap = matplotlib.cm.get_cmap('tab20')
colors = cmap(np.random.rand(len(positions)))
[4]:
plot_crystal(box, positions, colors, backend=backend)
_images/examples_examples_Visualizing_3D_Voronoi_and_Voxelization_5_0.png

We make a Voronoi tessellation of the system and plot it in 3D. The Voronoi cells are approximately rhombic dodecahedra, which tessellate 3D space in a face-centered cubic lattice.

[5]:
voro = freud.voronoi.Voronoi(box)
voro.compute(positions, buff=np.max(box.L)/2)
plot_crystal(box, positions, colors=colors,
             backend=backend, polytopes=voro.polytopes)
_images/examples_examples_Visualizing_3D_Voronoi_and_Voxelization_7_0.png

We generate a voxelization of this space by creating a dense lattice of points on a simple cubic lattice.

[6]:
def make_cubic_grid(box, voxels_per_side):
    v_space = np.linspace(0, 1, voxels_per_side+1)
    v_space = (v_space[:-1] + v_space[1:])/2  # gets centers of the voxels
    return np.array([box.makeCoordinates([x, y, z])
                     for x in v_space for y in v_space for z in v_space])
[7]:
voxels_per_side = 30
cubic_grid = make_cubic_grid(box, voxels_per_side)

# Make the spheres overlap just a bit
radii = np.ones(len(cubic_grid)) * 0.8 * np.max(box.L) / voxels_per_side

plot_crystal(box, cubic_grid, radii=radii, backend=backend)
_images/examples_examples_Visualizing_3D_Voronoi_and_Voxelization_10_0.png

We color the voxels by their first nearest neighbor. This is mathematically equivalent to being inside the corresponding Voronoi cell. Here, we get the neighbor indices (this can be used to separate the Voronoi cells into voxels).

[8]:
nn = freud.locality.NearestNeighbors(rmax=1, n_neigh=1)
nn.compute(box, cubic_grid, positions)
voxel_neighbors = -np.ones(len(cubic_grid), dtype=int)
for i, j in zip(nn.nlist.index_i, nn.nlist.index_j):
    voxel_neighbors[i] = j

Next, we use these indices to color and draw the voxelization.

[9]:
voxel_colors = np.array([colors[i] for i in voxel_neighbors])
plot_crystal(box, cubic_grid, colors=voxel_colors,
             radii=radii, backend=backend)
_images/examples_examples_Visualizing_3D_Voronoi_and_Voxelization_14_0.png

Benchmarks

Performance is a central consideration for freud. Below are some benchmarks comparing freud to other tools offering similar analysis methods.

Benchmarking Neighbor Finding against scipy

The neighbor finding algorithms in freud are highly efficient and rely on parallelized C++ code. Below, we show a benchmark of freud’s AABBQuery and LinkCell algorithms against scipy.spatial.cKDTree. This benchmark was run on an Intel(R) Core(TM) i3-8100B CPU @ 3.60GHz.

[1]:
import freud
import scipy
import numpy as np
import matplotlib.pyplot as plt
import timeit
from tqdm import tqdm
[2]:
def make_scaled_system(N, Nneigh=12):
    L = (4 / 3 * np.pi * N / Nneigh)**(1/3)
    box = freud.box.Box.cube(L)
    seed = 0
    np.random.seed(seed)
    points = np.random.uniform(-L/2, L/2, (N, 3))
    return box, points

box, points = make_scaled_system(1000)
Timing Functions
[3]:
def time_statement(stmt, repeat=5, number=100, **kwargs):
    timer = timeit.Timer(stmt=stmt, globals=kwargs)
    times = timer.repeat(repeat, number)
    return np.mean(times), np.std(times)
[4]:
def time_freud_lc(box, points):
    return time_statement("lc = freud.locality.LinkCell(box, rcut);"
                          "lc.compute(box, points, exclude_ii=False)",
                          freud=freud, box=box, points=points, rcut=1.0)
[5]:
def time_freud_abq(box, points):
    return time_statement("aq = freud.locality.AABBQuery(box, points);"
                          "aq.queryBall(points, rcut, exclude_ii=False).toNList()",
                          freud=freud, box=box, points=points, rcut=1.0)
[6]:
def time_scipy_ckdtree(box, points):
    shifted_points = points + np.asarray(box.L)/2
    # SciPy only supports cubic boxes
    assert box.Lx == box.Ly == box.Lz
    assert box.xy == box.xz == box.yz == 0
    return time_statement("kdtree = scipy.spatial.cKDTree(points, boxsize=L);"
                          "kdtree.query_ball_tree(kdtree, r=rcut)",
                          scipy=scipy, points=shifted_points, L=box.Lx, rcut=1.0)
[7]:
# Test timing functions
lc_t = time_freud_lc(box, points)
print(lc_t)
abq_t = time_freud_abq(box, points)
print(abq_t)
kd_t = time_scipy_ckdtree(box, points)
print(kd_t)
(0.118918436, 0.003592257320728501)
(0.11153316980000012, 0.002647679687358477)
(0.48859084159999994, 0.0031191134541460257)
Perform Measurements
[8]:
def measure_runtime_scaling_N(Ns, rcut=1.0):
    result_times = []
    for N in tqdm(Ns):
        box, points = make_scaled_system(N)
        result_times.append((
            time_scipy_ckdtree(box, points),
            time_freud_abq(box, points),
            time_freud_lc(box, points)))
    return np.asarray(result_times)
[9]:
def plot_result_times(result_times, Ns):
    plt.figure(figsize=(6, 4), dpi=200)
    plt.errorbar(Ns, result_times[:, 0, 0], result_times[:, 0, 1], label="scipy v{} cKDTree".format(scipy.__version__))
    plt.errorbar(Ns, result_times[:, 1, 0], result_times[:, 1, 1], label="freud v{} AABBQuery".format(freud.__version__))
    plt.errorbar(Ns, result_times[:, 2, 0], result_times[:, 2, 1], label="freud v{} LinkCell".format(freud.__version__))
    plt.title(r'Neighbor finding for 12 average neighbors')
    plt.xlabel(r'Number of points $N$')
    plt.ylabel(r'Runtime for 100 iterations (s)')
    plt.legend()
    plt.show()
[10]:
# Use geometrically-spaced values of N, rounded to one significant figure
Ns = list(sorted(set(map(
    lambda x: int(round(x, -int(np.floor(np.log10(np.abs(x)))))),
    np.exp(np.linspace(np.log(50), np.log(5000), 10))))))
[11]:
result_times = measure_runtime_scaling_N(Ns)
plot_result_times(result_times, Ns)
100%|██████████| 10/10 [00:41<00:00,  8.46s/it]
_images/examples_examples_Benchmarking_Neighbor_Finding_against_scipy_13_1.png
Benchmarking RDF against MDAnalysis

The algorithms in freud are highly efficient and rely on parallelized C++ code. Below, we show a benchmark of freud.density.RDF against MDAnalysis.analysis.rdf. This benchmark was run on an Intel(R) Core(TM) i3-8100B CPU @ 3.60GHz.

[1]:
import freud
import gsd
import MDAnalysis
import MDAnalysis.analysis.rdf
import multiprocessing as mp
import numpy as np
import matplotlib.pyplot as plt
import timeit
from tqdm import tqdm
[2]:
trajectory_filename = 'data/rdf_benchmark.gsd'
rmax = 5
rmin = 0.1
nbins = 75
[3]:
trajectory = MDAnalysis.coordinates.GSD.GSDReader(trajectory_filename)
frame = trajectory[0]
topology = MDAnalysis.core.topology.Topology(n_atoms=frame.n_atoms)
u = MDAnalysis.as_Universe(topology, trajectory_filename)

rdf = MDAnalysis.analysis.rdf.InterRDF(g1=u.atoms, g2=u.atoms,
                                       nbins=nbins,
                                       range=(rmin, rmax)).run()
[4]:
plt.plot(rdf.bins, rdf.rdf)
plt.show()
_images/examples_examples_Benchmarking_RDF_against_MDAnalysis_4_0.png
[5]:
with gsd.hoomd.open(trajectory_filename, 'rb') as traj:
    freud_rdf = freud.density.RDF(rmax=rmax, dr=(rmax-rmin)/nbins, rmin=rmin)
    for frame in traj:
        freud_rdf.accumulate(frame.configuration.box, frame.particles.position)
[6]:
plt.plot(freud_rdf.R, freud_rdf.RDF)
plt.show()
_images/examples_examples_Benchmarking_RDF_against_MDAnalysis_6_0.png
Timing Functions
[7]:
def time_statement(stmt, repeat=3, number=1, **kwargs):
    timer = timeit.Timer(stmt=stmt, globals=kwargs)
    times = timer.repeat(repeat, number)
    return np.mean(times), np.std(times)
[8]:
def time_freud_rdf(trajectory_filename, rmax, rmin, nbins):
    code = """
rdf = freud.density.RDF(rmax=rmax, dr=(rmax-rmin)/nbins, rmin=rmin)
for frame in trajectory:
    rdf.accumulate(frame.configuration.box, frame.particles.position)"""
    with gsd.hoomd.open(trajectory_filename, 'rb') as trajectory:
        return time_statement(code, freud=freud, trajectory=trajectory, rmax=rmax, rmin=rmin, nbins=nbins)
[9]:
def time_mdanalysis_rdf(trajectory_filename, rmax, rmin, nbins):
    trajectory = MDAnalysis.coordinates.GSD.GSDReader(trajectory_filename)
    frame = trajectory[0]
    topology = MDAnalysis.core.topology.Topology(n_atoms=frame.n_atoms)
    u = MDAnalysis.as_Universe(topology, trajectory_filename)
    code = """rdf = MDAnalysis.analysis.rdf.InterRDF(g1=u.atoms, g2=u.atoms, nbins=nbins, range=(rmin, rmax)).run()"""
    return time_statement(code, MDAnalysis=MDAnalysis, u=u, rmax=rmax, rmin=rmin, nbins=nbins)
[10]:
# Test timing functions
params = dict(
    trajectory_filename=trajectory_filename,
    rmax=rmax,
    rmin=rmin,
    nbins=nbins)

def system_size(trajectory_filename, **kwargs):
    with gsd.hoomd.open(trajectory_filename, 'rb') as trajectory:
        return {'frames': len(trajectory),
                'particles': len(trajectory[0].particles.position)}

print(system_size(**params))
freud_rdf_runtime = time_freud_rdf(**params)
print('freud:', freud_rdf_runtime)
mdanalysis_rdf_runtime = time_mdanalysis_rdf(**params)
print('MDAnalysis:', mdanalysis_rdf_runtime)
{'frames': 5, 'particles': 15625}
freud: (2.8377244066666663, 0.016533837168724797)
MDAnalysis: (18.35462979233333, 0.058068162708303186)
Perform Measurements
[11]:
def measure_runtime_scaling_rmax(rmaxes, **params):
    result_times = []
    for rmax in tqdm(rmaxes):
        params.update(dict(rmax=rmax))
        freud.parallel.setNumThreads(1)
        freud_single = time_freud_rdf(**params)
        freud.parallel.setNumThreads(0)
        result_times.append((freud_single, time_freud_rdf(**params), time_mdanalysis_rdf(**params)))
    return np.asarray(result_times)
[12]:
def plot_result_times(result_times, rmaxs, frames, particles):
    plt.figure(figsize=(6, 4), dpi=200)
    plt.errorbar(rmaxs, result_times[:, 0, 0], result_times[:, 0, 1], label="freud v{} density.RDF P=1".format(freud.__version__))
    plt.errorbar(rmaxs, result_times[:, 1, 0], result_times[:, 1, 1], label="freud v{} density.RDF P={}".format(freud.__version__, mp.cpu_count()))
    plt.errorbar(rmaxs, result_times[:, 2, 0], result_times[:, 2, 1], label="MDAnalysis v{} analysis.rdf.InterRDF".format(MDAnalysis.__version__))
    plt.title(r'RDF for {} frames, {} particles'.format(frames, particles))
    plt.xlabel(r'RDF $r_{{max}}$')
    plt.ylabel(r'Average Runtime (s)')
    plt.yscale('log')
    plt.legend()
    plt.show()
[13]:
rmaxes = [0.2, 0.3, 0.5, 1, 2, 3]
[14]:
result_times = measure_runtime_scaling_rmax(rmaxes, **params)
plot_result_times(result_times, rmaxes, **system_size(params['trajectory_filename']))
100%|██████████| 6/6 [00:28<00:00,  7.19s/it]
_images/examples_examples_Benchmarking_RDF_against_MDAnalysis_16_1.png
[15]:
print('Speedup, parallel freud / serial freud: {:.3f}x'.format(np.average(result_times[:, 0, 0] / result_times[:, 1, 0])))
print('Speedup, parallel freud / MDAnalysis: {:.3f}x'.format(np.average(result_times[:, 2, 0] / result_times[:, 1, 0])))
print('Speedup, serial freud / MDAnalysis: {:.3f}x'.format(np.average(result_times[:, 2, 0] / result_times[:, 0, 0])))
Speedup, parallel freud / serial freud: 2.575x
Speedup, parallel freud / MDAnalysis: 7.279x
Speedup, serial freud / MDAnalysis: 2.828x

Modules

Below is a list of modules in freud. To add your own module, read the development guide.

Box Module

Overview

freud.box.Box

The freud Box class for simulation boxes.

freud.box.ParticleBuffer

Replicates particles outside the box via periodic images.

Details

The Box class defines the geometry of a simulation box. The module natively supports periodicity by providing the fundamental features for wrapping vectors outside the box back into it. The ParticleBuffer class is used to replicate particles across the periodic boundary to assist analysis methods that do not recognize periodic boundary conditions or extend beyond the limits of one periodicity of the box.
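
For example, the following minimal sketch (illustrative only, using the classes documented below) builds a cubic box and wraps points that lie outside of it back inside:

import numpy as np
import freud

box = freud.box.Box.cube(L=10)            # cubic box centered at the origin
points = np.array([[6.0, 0.0, 0.0],       # lies outside the box along x
                   [0.0, -5.5, 2.0]], dtype=np.float32)
wrapped = box.wrap(points)                # e.g. [6, 0, 0] becomes [-4, 0, 0]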

Box
class freud.box.Box(Lx, Ly, Lz, xy, xz, yz, is2D=None)

The freud Box class for simulation boxes.

Module author: Richmond Newman <newmanrs@umich.edu>

Module author: Carl Simon Adorf <csadorf@umich.edu>

Module author: Bradley Dice <bdice@bradleydice.com>

Changed in version 0.7.0: Added box periodicity interface

The Box class is defined according to the conventions of the HOOMD-blue simulation software. For more information, please see the HOOMD-blue documentation: http://hoomd-blue.readthedocs.io/en/stable/box.html

Parameters
  • Lx (float) – Length of side x.

  • Ly (float) – Length of side y.

  • Lz (float) – Length of side z.

  • xy (float) – Tilt of xy plane.

  • xz (float) – Tilt of xz plane.

  • yz (float) – Tilt of yz plane.

  • is2D (bool) – Specify that this box is 2-dimensional, default is 3-dimensional.

Variables
  • xy (float) – The xy tilt factor.

  • xz (float) – The xz tilt factor.

  • yz (float) – The yz tilt factor.

  • L (\(\left(3\right)\) numpy.ndarray, settable) – The box lengths along x, y, and z.

  • Lx (float, settable) – The x-dimension length.

  • Ly (float, settable) – The y-dimension length.

  • Lz (float, settable) – The z-dimension length.

  • Linv (\(\left(3\right)\) numpy.ndarray) – The inverse box lengths.

  • volume (float) – The box volume (area in 2D).

  • dimensions (int, settable) – The number of dimensions (2 or 3).

  • periodic (\(\left(3\right)\) numpy.ndarray, settable) – Whether or not the box is periodic.

  • periodic_x (bool, settable) – Whether or not the box is periodic in x.

  • periodic_y (bool, settable) – Whether or not the box is periodic in y.

  • periodic_z (bool, settable) – Whether or not the box is periodic in z.

classmethod cube(type cls, L=None)

Construct a cubic box with equal lengths.

Parameters

L (float) – The edge length

classmethod from_box(type cls, box, dimensions=None)

Initialize a Box instance from a box-like object.

Parameters
  • box – A box-like object

  • dimensions (int) – Dimensionality of the box (Default value = None)

Note

Objects that can be converted to freud boxes include lists like [Lx, Ly, Lz, xy, xz, yz], dictionaries with keys 'Lx', 'Ly', 'Lz', 'xy', 'xz', 'yz', 'dimensions', namedtuples with properties Lx, Ly, Lz, xy, xz, yz, dimensions, 3x3 matrices (see from_matrix()), or existing freud.box.Box objects.

If any of Lz, xy, xz, yz are not provided, they will be set to 0.

If all values are provided, a triclinic box will be constructed. If only Lx, Ly, Lz are provided, an orthorhombic box will be constructed. If only Lx, Ly are provided, a rectangular (2D) box will be constructed.

If the optional dimensions argument is given, this will be used as the box dimensionality. Otherwise, the box dimensionality will be detected from the dimensions of the provided box. If no dimensions can be detected, the box will be 2D if Lz == 0, and 3D otherwise.

Returns

The resulting box object.

Return type

freud.box.Box
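
As an illustration of the box-like objects described in the note above, each of the following calls should produce an equivalent cubic box (illustrative sketch, not part of the API reference):

import freud

box_from_list = freud.box.Box.from_box([10, 10, 10, 0, 0, 0])      # [Lx, Ly, Lz, xy, xz, yz]
box_from_dict = freud.box.Box.from_box({'Lx': 10, 'Ly': 10, 'Lz': 10})
box_from_box = freud.box.Box.from_box(freud.box.Box.cube(10))       # copy an existing Box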

classmethod from_matrix(type cls, boxMatrix, dimensions=None)

Initialize a Box instance from a box matrix.

For more information and the source for this code, see: http://hoomd-blue.readthedocs.io/en/stable/box.html

Parameters
  • boxMatrix (array-like) – A 3x3 matrix or list of lists

  • dimensions (int) – Number of dimensions (Default value = None)

getImage

Returns the image corresponding to a wrapped vector.

New in version 0.8.

Parameters

vecs (\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray) – Coordinates of a single vector or array of \(N\) unwrapped vectors.

Returns

Image index vector.

Return type

\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray

getLatticeVector

Get the lattice vector with index \(i\).

Parameters

i (unsigned int) – Index (\(0 \leq i < d\)) of the lattice vector, where \(d\) is the box dimension (2 or 3).

Returns

Lattice vector with index \(i\).

Return type

\(\left(3\right)\) numpy.ndarray

is2D

Return if box is 2D (True) or 3D (False).

Returns

True if 2D, False if 3D.

Return type

bool

makeCoordinates

Convert fractional coordinates into real coordinates.

Parameters

fractions (\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray) – Fractional coordinates between 0 and 1 within parallelepipedal box.

Returns

Vectors of real coordinates: \(\left(3\right)\) or \(\left(N, 3\right)\).

Return type

\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray

makeFraction

Convert real coordinates into fractional coordinates.

Parameters

vecs (\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray) – Real coordinates within parallelepipedal box.

Returns

Fractional coordinate vectors: \(\left(3\right)\) or \(\left(N, 3\right)\).

Return type

\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray
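
A minimal sketch (illustrative only) of converting between fractional and real coordinates with makeCoordinates and makeFraction:

import numpy as np
import freud

box = freud.box.Box.cube(10)
fractions = np.array([[0.5, 0.5, 0.5],    # box center
                      [0.0, 0.0, 0.0]],   # a box corner
                     dtype=np.float32)
real = box.makeCoordinates(fractions)     # fractional -> real coordinates
back = box.makeFraction(real)             # real -> fractional, recovers the original fractions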

classmethod square(type cls, L=None)

Construct a 2-dimensional (square) box with equal lengths.

Parameters

L (float) – The edge length

to_dict

Return box as dictionary.

Returns

Box parameters

Return type

dict

to_matrix

Returns the box matrix (3x3).

Returns

Box matrix

Return type

\(\left(3, 3\right)\) numpy.ndarray

to_tuple

Returns the box as a named tuple.

Returns

Box parameters

Return type

namedtuple

unwrap

Unwrap a given array of vectors inside the box back into real space, using an array of image indices that determine how many times to unwrap in each dimension.

Parameters
  • vecs (\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray) – Single vector or array of \(N\) vectors. The vectors are modified in place.

  • imgs (\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray) – Single image index or array of \(N\) image indices.

Returns

Vectors unwrapped by the image indices provided.

Return type

\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray

wrap

Wrap a given array of vectors from real space into the box, using the periodic boundaries.

Note

Since the origin of the box is in the center, wrapping is equivalent to applying the minimum image convention to the input vectors.

Parameters

vecs (\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray) – Single vector or array of \(N\) vectors. The vectors are altered in place and returned.

Returns

Vectors wrapped into the box.

Return type

\(\left(3\right)\) or \(\left(N, 3\right)\) numpy.ndarray
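
The relationship between wrap, getImage, and unwrap can be sketched as follows (illustrative only; the numbers assume a cubic box of length 10 centered at the origin):

import numpy as np
import freud

box = freud.box.Box.cube(10)
unwrapped = np.array([[12.0, 0.0, 0.0]], dtype=np.float32)  # outside the box

images = box.getImage(unwrapped)             # image index, here [[1, 0, 0]]
wrapped = box.wrap(unwrapped.copy())         # wrapped into the box, here [[2, 0, 0]]
restored = box.unwrap(wrapped.copy(), images)  # recovers the original [[12, 0, 0]]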

Particle Buffer
class freud.box.ParticleBuffer(box)

Replicates particles outside the box via periodic images.

Module author: Ben Schultz <baschult@umich.edu>

Module author: Bradley Dice <bdice@bradleydice.com>

New in version 0.11.

Parameters

box (freud.box.Box) – Simulation box.

Variables
  • buffer_particles (\(\left(N_{buffer}, 3\right)\) numpy.ndarray) – The buffer particle positions.

  • buffer_ids (\(\left(N_{buffer}\right)\) numpy.ndarray) – The buffer particle ids.

  • buffer_box (freud.box.Box) – The buffer box, expanded to hold the replicated particles.

compute

Compute the particle buffer.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points used to calculate particle buffer.

  • buffer (float or list of 3 floats) – Buffer distance for replication outside the box.

  • images (bool) – If False (default), buffer is a distance. If True, buffer is a number of images to replicate in each dimension. Note that one image adds half of a box length to each side, meaning that one image doubles the box side lengths, two images triple the box side lengths, and so on.
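
A minimal sketch of using ParticleBuffer (illustrative only; the point data is random):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)

pbuff = freud.box.ParticleBuffer(box)
pbuff.compute(points, buffer=2.0)         # create periodic images out to a distance of 2.0 outside the box
replicas = pbuff.buffer_particles         # positions of the periodic images
original_ids = pbuff.buffer_ids           # index of the source particle for each image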

Cluster Module

Overview

freud.cluster.Cluster

Finds clusters in a set of points.

freud.cluster.ClusterProperties

Routines for computing properties of point clusters.

Details

The freud.cluster module aids in finding and computing the properties of clusters of points in a system.

Cluster
class freud.cluster.Cluster(box, rcut)

Finds clusters in a set of points.

Given a set of coordinates and a cutoff, freud.cluster.Cluster will determine all of the clusters of points that are made up of points that are closer than the cutoff. Clusters are 0-indexed. The class contains an index array, the cluster_idx attribute, which can be used to identify which cluster a particle is associated with: cluster_obj.cluster_idx[i] is the cluster index in which particle i is found. By the definition of a cluster, points that are not within the cutoff of another point end up in their own 1-particle cluster.

Identifying micelles is one primary use-case for finding clusters. This operation is somewhat different, though. In a cluster of points, each and every point belongs to one and only one cluster. However, because a polymer is a string of points, a single polymer may be present in more than one cluster. To handle this situation, an optional layer is presented on top of the cluster_idx array. Given a key value per particle (i.e. the polymer id), the computeClusterMembership function will process cluster_idx with the key values in mind and provide a list of keys that are present in each cluster.

Module author: Joshua Anderson <joaander@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • rcut (float) – Particle distance cutoff.

Note

2D: freud.cluster.Cluster properly handles 2D boxes. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • num_clusters (int) – The number of clusters.

  • num_particles (int) – The number of particles.

  • cluster_idx ((\(N_{particles}\)) numpy.ndarray) – The cluster index for each particle.

  • cluster_keys (list(list)) – A list of lists of the keys contained in each cluster.

computeClusterMembership

Compute the clusters with key membership. Loops over all particles and adds them to a list of sets. Each set contains all the keys that are part of that cluster. Get the computed list with cluster_keys.

Parameters

keys ((\(N_{particles}\)) numpy.ndarray) – Membership keys, one for each particle.

computeClusters

Compute the clusters for the given set of points.

Parameters
  • points ((\(N_{particles}\), 3) np.ndarray) – Particle coordinates.

  • nlist (freud.locality.NeighborList, optional) – Object to use to find bonds (Default value = None).

  • box (freud.box.Box, optional) – Simulation box (Default value = None).

plot

Plot cluster distribution.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)
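
A minimal sketch of the clustering workflow described above (illustrative only; the points are random):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(50, 3)).astype(np.float32)

cl = freud.cluster.Cluster(box, 1.0)     # cutoff distance rcut = 1.0
cl.computeClusters(points)
print(cl.num_clusters)                   # number of clusters found
print(cl.cluster_idx[:10])               # cluster index of the first ten particles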

Cluster Properties
class freud.cluster.ClusterProperties(box)

Routines for computing properties of point clusters.

Given a set of points and cluster ids (from Cluster, or another source), ClusterProperties determines the following properties for each cluster:

  • Center of mass

  • Gyration tensor

The computed center of mass for each cluster (properly handling periodic boundary conditions) can be accessed with cluster_COM attribute. The \(3 \times 3\) gyration tensor \(G\) can be accessed with cluster_G attribute. The tensor is symmetric for each cluster.

Module author: Joshua Anderson <joaander@umich.edu>

Parameters

box (freud.box.Box) – Simulation box.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • num_clusters (int) – The number of clusters.

  • cluster_COM ((\(N_{clusters}\), 3) numpy.ndarray) – The center of mass of the last computed cluster.

  • cluster_G ((\(N_{clusters}\), 3, 3) numpy.ndarray) – The cluster \(G\) tensors computed by the last call to computeProperties().

  • cluster_sizes ((\(N_{clusters}\)) numpy.ndarray) – The cluster sizes computed by the last call to computeProperties().

computeProperties

Compute properties of the point clusters. Loops over all points in the given array and determines the center of mass of the cluster as well as the \(G\) tensor. These can be accessed after the call to computeProperties() with the cluster_COM and cluster_G attributes.

Parameters
  • points ((\(N_{particles}\), 3) np.ndarray) – Positions of the particles making up the clusters.

  • cluster_idx (np.ndarray) – List of cluster indexes for each particle.

  • box (freud.box.Box, optional) – Simulation box (Default value = None).
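
A minimal sketch combining Cluster and ClusterProperties (illustrative only; the points are random):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(50, 3)).astype(np.float32)

cl = freud.cluster.Cluster(box, 1.0)     # cutoff distance rcut = 1.0
cl.computeClusters(points)

props = freud.cluster.ClusterProperties(box)
props.computeProperties(points, cl.cluster_idx)
print(props.cluster_COM)                 # per-cluster centers of mass
print(props.cluster_sizes)               # number of particles in each cluster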

Density Module

Overview

freud.density.FloatCF

Computes the real pairwise correlation function.

freud.density.ComplexCF

Computes the complex pairwise correlation function.

freud.density.GaussianDensity

Computes the density of a system on a grid.

freud.density.LocalDensity

Computes the local density around a particle.

freud.density.RDF

Computes RDF for supplied data.

Details

The freud.density module contains various classes relating to the density of the system. These functions allow evaluation of particle distributions with respect to other particles.

Correlation Functions
class freud.density.FloatCF(rmax, dr)

Computes the real pairwise correlation function.

The correlation function is given by \(C(r) = \left\langle s_1(0) \cdot s_2(r) \right\rangle\) between two sets of points \(p_1\) (ref_points) and \(p_2\) (points) with associated values \(s_1\) (ref_values) and \(s_2\) (values). Computing the correlation function results in an array of the expected (average) product of all values at a given radial distance \(r\).

The values of \(r\) where the correlation function is computed are controlled by the rmax and dr parameters to the constructor. rmax determines the maximum distance at which to compute the correlation function and dr is the step size for each bin.

Note

2D: freud.density.FloatCF properly handles 2D boxes. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Note

Self-correlation: It is often the case that we wish to compute the correlation function of a set of points with itself. If points is the same as ref_points, not provided, or None, we omit accumulating the self-correlation value in the first bin.

Module author: Matthew Spellings <mspells@umich.edu>

Parameters
  • rmax (float) – Maximum pointwise distance to include in the calculation.

  • dr (float) – Bin size.

Variables
  • RDF ((\(N_{bins}\)) numpy.ndarray) – Expected (average) product of all values whose radial distance falls within a given distance bin.

  • box (freud.box.Box) – The box used in the calculation.

  • counts ((\(N_{bins}\)) numpy.ndarray) – The number of points in each histogram bin.

  • R ((\(N_{bins}\)) numpy.ndarray) – The centers of each bin.

accumulate

Calculates the correlation function and adds to the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{ref\_points}\), 3) numpy.ndarray) – Reference points used to calculate the correlation function.

  • ref_values ((\(N_{ref\_points}\)) numpy.ndarray) – Real values used to calculate the correlation function.

  • points ((\(N_{points}\), 3) numpy.ndarray, optional) – Points used to calculate the correlation function. Uses ref_points if not provided or None.

  • values ((\(N_{points}\)) numpy.ndarray, optional) – Real values used to calculate the correlation function. Uses ref_values if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

compute

Calculates the correlation function for the given points. Will overwrite the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{ref\_points}\), 3) numpy.ndarray) – Reference points used to calculate the correlation function.

  • ref_values ((\(N_{ref\_points}\)) numpy.ndarray) – Real values used to calculate the correlation function.

  • points ((\(N_{points}\), 3) numpy.ndarray, optional) – Points used to calculate the correlation function. Uses ref_points if not provided or None.

  • values ((\(N_{points}\)) numpy.ndarray, optional) – Real values used to calculate the correlation function. Uses ref_values if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

plot

Plot correlation function.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

reset

Resets the values of the correlation function histogram in memory.
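
A minimal sketch of computing a real correlation function (illustrative only; the points and values are random):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)
values = np.random.rand(100)             # one real value per point

cf = freud.density.FloatCF(rmax=4.0, dr=0.1)
cf.compute(box, points, values)          # self-correlation of the values
r = cf.R                                 # bin centers
corr = cf.RDF                            # expected product of values in each bin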

class freud.density.ComplexCF(rmax, dr)

Computes the complex pairwise correlation function.

The correlation function is given by \(C(r) = \left\langle s_1(0) \cdot s_2(r) \right\rangle\) between two sets of points \(p_1\) (ref_points) and \(p_2\) (points) with associated values \(s_1\) (ref_values) and \(s_2\) (values). Computing the correlation function results in an array of the expected (average) product of all values at a given radial distance \(r\).

The values of \(r\) where the correlation function is computed are controlled by the rmax and dr parameters to the constructor. rmax determines the maximum distance at which to compute the correlation function and dr is the step size for each bin.

Note

2D: freud.density.ComplexCF properly handles 2D boxes. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Note

Self-correlation: It is often the case that we wish to compute the correlation function of a set of points with itself. If points is the same as ref_points, not provided, or None, we omit accumulating the self-correlation value in the first bin.

Module author: Matthew Spellings <mspells@umich.edu>

Parameters
  • rmax (float) – Maximum pointwise distance to include in the calculation.

  • dr (float) – Bin size.

Variables
  • RDF ((\(N_{bins}\)) numpy.ndarray) – Expected (average) product of all values at a given radial distance.

  • box (freud.box.Box) – Box used in the calculation.

  • counts ((\(N_{bins}\)) numpy.ndarray) – The number of points in each histogram bin.

  • R ((\(N_{bins}\)) numpy.ndarray) – The centers of each bin.

accumulate

Calculates the correlation function and adds to the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{ref\_points}\), 3) numpy.ndarray) – Reference points used to calculate the correlation function.

  • ref_values ((\(N_{ref\_points}\)) numpy.ndarray) – Complex values used to calculate the correlation function.

  • points ((\(N_{points}\), 3) numpy.ndarray, optional) – Points used to calculate the correlation function. Uses ref_points if not provided or None.

  • values ((\(N_{points}\)) numpy.ndarray, optional) – Complex values used to calculate the correlation function. Uses ref_values if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

compute

Calculates the correlation function for the given points. Will overwrite the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{ref\_points}\), 3) numpy.ndarray) – Reference points used to calculate the correlation function.

  • ref_values ((\(N_{ref\_points}\)) numpy.ndarray) – Complex values used to calculate the correlation function.

  • points ((\(N_{points}\), 3) numpy.ndarray, optional) – Points used to calculate the correlation function. Uses ref_points if not provided or None.

  • values ((\(N_{points}\)) numpy.ndarray, optional) – Complex values used to calculate the correlation function. Uses ref_values if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

plot

Plot complex correlation function.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

reset

Resets the values of the correlation function histogram in memory.

Gaussian Density
class freud.density.GaussianDensity(*args)

Computes the density of a system on a grid.

Replaces particle positions with a Gaussian blur and calculates the contribution from each to the prescribed grid based upon the distance of the grid cell from the center of the Gaussian. The resulting data is a regular grid of particle densities that can be used in standard algorithms requiring evenly spaced points, such as fast Fourier transforms. The dimensions of the image (grid) are set in the constructor, and can either be set equally for all dimensions or for each dimension independently.

  • Constructor Calls:

    Initialize with all dimensions identical:

    freud.density.GaussianDensity(width, r_cut, sigma)
    

    Initialize with each dimension specified:

    freud.density.GaussianDensity(width_x, width_y, width_z, r_cut, sigma)
    

Module author: Joshua Anderson <joaander@umich.edu>

Parameters
  • width (unsigned int) – Number of pixels to make the image.

  • width_x (unsigned int) – Number of pixels to make the image in x.

  • width_y (unsigned int) – Number of pixels to make the image in y.

  • width_z (unsigned int) – Number of pixels to make the image in z.

  • r_cut (float) – Distance over which to blur.

  • sigma (float) – Sigma parameter for Gaussian.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • gaussian_density ((\(w_x\), \(w_y\), \(w_z\)) numpy.ndarray) – The image grid with the Gaussian density.

compute

Calculates the Gaussian blur for the specified points. Does not accumulate (will overwrite current image).

Parameters
  • box (freud.box.Box) – Simulation box.

  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the density.

plot

Plot Gaussian Density.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)
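
A minimal sketch of computing a Gaussian-smeared density grid (illustrative only; it assumes, consistent with the other classes in this module, that compute() takes the box and the points):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)

gd = freud.density.GaussianDensity(64, 1.0, 0.5)   # width, r_cut, sigma
gd.compute(box, points)
grid = gd.gaussian_density                         # regular grid of densities, here 64x64x64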

Local Density
class freud.density.LocalDensity(r_cut, volume, diameter)

Computes the local density around a particle.

The density of the local environment is computed and averaged for a given set of reference points in a sea of data points. Providing the same points calculates them against themselves. Computing the local density results in an array listing the value of the local density around each reference point. Also available is the number of neighbors for each reference point, giving the user the ability to count the number of particles in that region.

The values to compute the local density are set in the constructor. r_cut sets the maximum distance at which data points are included relative to a given reference point. volume is the volume of a single data point, and diameter is the diameter of the circumsphere of an individual data point. Note that the volume and diameter do not affect the reference point; whether or not data points are counted as neighbors of a given reference point is entirely determined by the distance between the reference point and the data point center relative to r_cut and the diameter of the data point.

In order to provide sufficiently smooth data, data points can be fractionally counted towards the density. Rather than perform compute-intensive area (volume) overlap calculations to determine the exact amount of overlap, the LocalDensity class performs a simple linear interpolation relative to the centers of the data points. Specifically, a point is counted as one full neighbor of a given reference point if it is entirely contained within r_cut, half of a neighbor if the distance to its center is exactly r_cut, and zero if its center is a distance greater than or equal to r_cut + diameter from the reference point’s center. Graphically, this looks like:

_images/density.png

Note

2D: freud.density.LocalDensity properly handles 2D boxes. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Module author: Joshua Anderson <joaander@umich.edu>

Parameters
  • r_cut (float) – Maximum distance over which to calculate the density.

  • volume (float) – Volume of a single particle.

  • diameter (float) – Diameter of particle circumsphere.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • density ((\(N_{ref\_points}\)) numpy.ndarray) – Density of points per ref_point.

  • num_neighbors ((\(N_{ref\_points}\)) numpy.ndarray) – Number of neighbor points for each ref_point.

compute

Calculates the local density for the specified points. Does not accumulate (will overwrite current data).

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{ref\_points}\), 3) numpy.ndarray) – Reference points to calculate the local density.

  • points ((\(N_{points}\), 3) numpy.ndarray, optional) – Points to calculate the local density. Uses ref_points if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).
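
A minimal sketch of a LocalDensity calculation (illustrative only; the points are random), mirroring the usage in the visualization examples above:

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)

ld = freud.density.LocalDensity(r_cut=3.0, volume=1.0, diameter=1.0)
ld.compute(box, points)          # points are compared against themselves
densities = ld.density           # local density around each reference point
neighbors = ld.num_neighbors     # (fractional) neighbor counts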

Radial Distribution Function
class freud.density.RDF(rmax, dr, rmin=0)

Computes RDF for supplied data.

The RDF (\(g \left( r \right)\)) is computed and averaged for a given set of reference points in a sea of data points. Providing the same points calculates them against themselves. Computing the RDF results in an RDF array listing the value of the RDF at each given \(r\), listed in the R array.

The values of \(r\) to compute the RDF are set by the values of rmin, rmax, dr in the constructor. rmax sets the maximum distance at which to calculate the \(g \left( r \right)\), rmin sets the minimum distance at which to calculate the \(g \left( r \right)\), and dr determines the step size for each bin.

Module author: Eric Harper <harperic@umich.edu>

Note

2D: freud.density.RDF properly handles 2D boxes. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Parameters
  • rmax (float) – Maximum interparticle distance to include in the calculation.

  • dr (float) – Distance between histogram bins.

  • rmin (float, optional) – Minimum interparticle distance to include in the calculation (Default value = 0).

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • RDF ((\(N_{bins}\),) numpy.ndarray) – Histogram of RDF values.

  • R ((\(N_{bins}\)) numpy.ndarray) – The centers of each bin.

  • n_r ((\(N_{bins}\),) numpy.ndarray) – Histogram of cumulative RDF values (i.e. the integrated RDF).

Changed in version 0.7.0: Added optional rmin argument.

accumulate

Calculates the RDF and adds to the current RDF histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{ref\_points}\), 3) numpy.ndarray) – Reference points used to calculate the RDF.

  • points ((\(N_{points}\), 3) numpy.ndarray, optional) – Points used to calculate the RDF. Uses ref_points if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

compute

Calculates the RDF for the specified points. Will overwrite the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{ref\_points}\), 3) numpy.ndarray) – Reference points used to calculate the RDF.

  • points ((\(N_{points}\), 3) numpy.ndarray, optional) – Points used to calculate the RDF. Uses ref_points if not provided or None.

  • nlist (freud.locality.NeighborList) – NeighborList to use to find bonds (Default value = None).

plot

Plot radial distribution function.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

reset

Resets the values of RDF in memory.
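
A minimal sketch of an RDF calculation for a single frame (illustrative only; the points are random). For averaging over a trajectory, call accumulate() once per frame instead, as in the benchmark above:

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(500, 3)).astype(np.float32)

rdf = freud.density.RDF(rmax=4.0, dr=0.1)
rdf.compute(box, points)         # g(r) of the points against themselves
r = rdf.R                        # bin centers
g_r = rdf.RDF                    # g(r) values in each bin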

Environment Module

Overview

freud.environment.BondOrder

Compute the bond orientational order diagram for the system of particles.

freud.environment.LocalDescriptors

Compute a set of descriptors (a numerical “fingerprint”) of a particle’s local environment.

freud.environment.MatchEnv

Clusters particles according to whether their local environments match or not, according to various shape matching metrics.

freud.environment.AngularSeparation

Calculates the minimum angles of separation between particles and references.

freud.environment.LocalBondProjection

Calculates the maximal projection of nearest neighbor bonds for each particle onto some set of reference vectors, defined in the particles’ local reference frame.

Details

The freud.environment module contains functions which characterize the local environments of particles in the system. These methods use the positions and orientations of particles in the local neighborhood of a given particle to characterize the particle environment.

Bond Order
class freud.environment.BondOrder(rmax, k, n, nBinsT, nBinsP)

Compute the bond orientational order diagram for the system of particles.

The bond orientational order diagram (BOOD) is a way of studying the average local environments experienced by particles. In a BOOD, a particle and its nearest neighbors (determined by either a prespecified number of neighbors or simply a cutoff distance) are treated as connected by a bond joining their centers. All of the bonds in the system are then binned by their azimuthal (\(\theta\)) and polar (\(\phi\)) angles to indicate the location of a particle’s neighbors relative to itself. The distance between the particle and its neighbor is only important when determining whether it is counted as a neighbor, but is not part of the BOOD; as such, the BOOD can be viewed as a projection of all bonds onto the unit sphere. The resulting 2D histogram provides insight into how particles are situated relative to one-another in a system.

This class provides access to the classical BOOD as well as a few useful variants. These variants can be accessed via the mode arguments to the compute() or accumulate() methods. Available modes of calculation are:

  • 'bod' (Bond Order Diagram, default): This mode constructs the default BOOD, which is the 2D histogram containing the number of bonds formed through each azimuthal \(\left( \theta \right)\) and polar \(\left( \phi \right)\) angle.

  • 'lbod' (Local Bond Order Diagram): In this mode, a particle’s neighbors are rotated into the local frame of the particle before the BOOD is calculated, i.e. the directions of bonds are determined relative to the orientation of the particle rather than relative to the global reference frame. An example of when this mode would be useful is when a system is composed of multiple grains of the same crystal; the normal BOOD would show twice as many peaks as expected, but using this mode, the bonds would be superimposed.

  • 'obcd' (Orientation Bond Correlation Diagram): This mode aims to quantify the degree of orientational as well as translational ordering. As a first step, the rotation that would align a particle’s neighbor with the particle is calculated. Then, the neighbor is rotated around the central particle by that amount, which actually changes the direction of the bond. One example of how this mode could be useful is in identifying plastic crystals, which exhibit translational but not orientational ordering. Normally, the BOOD for a plastic crystal would exhibit clear structure since there is translational order, but with this mode, the neighbor positions would actually be modified, resulting in an isotropic (disordered) BOOD.

  • 'oocd' (Orientation Orientation Correlation Diagram): This mode is substantially different from the other modes. Rather than compute the histogram of neighbor bonds, this mode instead computes a histogram of the directors of neighboring particles, where the director is defined as the basis vector \(\hat{z}\) rotated by the neighbor’s quaternion. The directors are then rotated into the central particle’s reference frame. This mode provides insight into the local orientational environment of particles, indicating, on average, how a particle’s neighbors are oriented.

Module author: Erin Teich <erteich@umich.edu>

Parameters
  • rmax (float) – Distance over which to calculate.

  • k (unsigned int) – Order parameter k. To be removed.

  • n (unsigned int) – Number of neighbors to find.

  • n_bins_t (unsigned int) – Number of \(\theta\) bins.

  • n_bins_p (unsigned int) – Number of \(\phi\) bins.

Variables
  • bond_order (\(\left(N_{\phi}, N_{\theta} \right)\) numpy.ndarray) – Bond order.

  • box (freud.box.Box) – Box used in the calculation.

  • theta (\(\left(N_{\theta} \right)\) numpy.ndarray) – The values of bin centers for \(\theta\).

  • phi (\(\left(N_{\phi} \right)\) numpy.ndarray) – The values of bin centers for \(\phi\).

  • n_bins_theta (unsigned int) – The number of bins in the \(\theta\) dimension.

  • n_bins_phi (unsigned int) – The number of bins in the \(\phi\) dimension.
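
Example (a minimal usage sketch assembled from the constructor and compute() parameters documented here; the cubic box, random points, identity quaternions, and scalar-first quaternion convention are placeholder assumptions, not prescribed inputs):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)
orientations = np.zeros((100, 4), dtype=np.float32)
orientations[:, 0] = 1  # identity quaternions (scalar-first convention assumed)

bo = freud.environment.BondOrder(2.0, 0, 12, 100, 100)  # rmax, k (unused), n, n_bins_t, n_bins_p
bo.compute(box, points, orientations, mode='bod')
print(bo.bond_order.shape)  # (n_bins_p, n_bins_t) histogram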

accumulate

Calculates the bond order histogram and adds to the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{ref\_points}\), 3) numpy.ndarray) – Reference points used to calculate bonds.

  • ref_orientations ((\(N_{ref\_points}\), 4) numpy.ndarray) – Reference orientations used to calculate bonds.

  • points ((\(N_{points}\), 3) numpy.ndarray, optional) – Points used to calculate bonds. Uses ref_points if not provided or None.

  • orientations ((\(N_{points}\), 4) numpy.ndarray) – Orientations used to calculate bonds. Uses ref_orientations if not provided or None.

  • mode (str, optional) – Mode to calculate bond order. Options are 'bod', 'lbod', 'obcd', or 'oocd' (Default value = 'bod').

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

compute

Calculates the bond order histogram. Will overwrite the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used to calculate bonds.

  • ref_orientations ((\(N_{particles}\), 4) numpy.ndarray) – Reference orientations used to calculate bonds.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used to calculate bonds. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 4) numpy.ndarray, optional) – Orientations used to calculate bonds. Uses ref_orientations if not provided or None.

  • mode (str, optional) – Mode to calculate bond order. Options are 'bod', 'lbod', 'obcd', or 'oocd' (Default value = 'bod').

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

reset

Resets the values of the bond order in memory.

Local Descriptors
class freud.environment.LocalDescriptors(num_neighbors, lmax, rmax, negative_m=True)

Compute a set of descriptors (a numerical “fingerprint”) of a particle’s local environment.

The resulting spherical harmonic array will be a complex-valued array of shape (num_bonds, num_sphs). Spherical harmonic calculation can be restricted to some number of nearest neighbors through the num_neighbors argument; if a particle has more bonds than this number, the last one or more rows of bond spherical harmonics for each particle will not be set.

Module author: Matthew Spellings <mspells@umich.edu>

Parameters
  • num_neighbors (unsigned int) – Maximum number of neighbors to compute descriptors for.

  • lmax (unsigned int) – Maximum spherical harmonic \(l\) to consider.

  • rmax (float) – Initial guess of the maximum radius to look for neighbors.

  • negative_m (bool) – True if we should also calculate \(Y_{lm}\) for negative \(m\).

Variables
  • sph (\(\left(N_{bonds}, \text{SphWidth} \right)\) numpy.ndarray) – A reference to the last computed spherical harmonic array.

  • num_particles (unsigned int) – The number of points passed to the last call to compute().

  • num_neighbors (unsigned int) – The number of neighbors used by the last call to compute(). Bounded from above by the number of reference points multiplied by the smaller of the num_neighbors argument passed to the last compute() call and the one passed to the constructor.

  • l_max (unsigned int) – The maximum spherical harmonic \(l\) to calculate for.

  • r_max (float) – The cutoff radius.
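
Example (a brief sketch using the documented constructor and compute() arguments; the box and random points are placeholder inputs):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)

ld = freud.environment.LocalDescriptors(8, 6, 2.0)  # num_neighbors, lmax, rmax
ld.compute(box, 8, points)
print(ld.sph.shape)  # (N_bonds, SphWidth), complex-valued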

compute

Calculates the local descriptors of bonds from a set of source points to a set of destination points.

Parameters
  • box (freud.box.Box) – Simulation box.

  • num_neighbors (unsigned int) – Number of nearest neighbors to compute with or to limit to, if the neighbor list is precomputed.

  • points_ref ((\(N_{particles}\), 3) numpy.ndarray) – Source points to calculate the order parameter.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Destination points to calculate the order parameter (Default value = None).

  • orientations ((\(N_{particles}\), 4) numpy.ndarray, optional) – Orientation of each reference point (Default value = None).

  • mode (str, optional) – Orientation mode to use for environments, either 'neighborhood' to use the orientation of the local neighborhood, 'particle_local' to use the given particle orientations, or 'global' to not rotate environments (Default value = 'neighborhood').

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

Match Environments
class freud.environment.MatchEnv(box, rmax, k)

Clusters particles according to whether their local environments match or not, according to various shape matching metrics.

Module author: Erin Teich <erteich@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • rmax (float) – Cutoff radius for cell list and clustering algorithm. Values near the first minimum of the RDF are recommended.

  • k (unsigned int) – Number of nearest neighbors taken to define the local environment of any given particle.

Variables
  • tot_environment (\(\left(N_{particles}, N_{neighbors}, 3\right)\) numpy.ndarray) – All environments for all particles.

  • num_particles (unsigned int) – The number of particles.

  • num_clusters (unsigned int) – The number of clusters.

  • clusters (\(\left(N_{particles}\right)\) numpy.ndarray) – The per-particle index indicating cluster membership.
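
Example (a minimal sketch based on the documented constructor and cluster() parameters; the box, random points, and threshold are placeholder values):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)

match = freud.environment.MatchEnv(box, 2.0, 12)  # box, rmax, k
match.cluster(points, 0.2)  # threshold = 0.2
print(match.num_clusters)
print(match.clusters)  # per-particle cluster index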

cluster

Determine clusters of particles with matching environments.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Destination points to calculate the order parameter.

  • threshold (float) – Maximum magnitude of the vector difference between two vectors, below which they are “matching.”

  • hard_r (bool) – If True, add all particles that fall within the threshold of m_rmaxsq to the environment.

  • registration (bool) – If True, first use brute force registration to orient one set of environment vectors with respect to the other set such that it minimizes the RMSD between the two sets.

  • global_search (bool) – If True, do an exhaustive search wherein the environments of every single pair of particles in the simulation are compared. If False, only compare the environments of neighboring particles.

  • env_nlist (freud.locality.NeighborList, optional) – NeighborList to use to find the environment of every particle (Default value = None).

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find neighbors of every particle, to compare environments (Default value = None).

getEnvironment

Returns the set of vectors defining the environment indexed by i.

Parameters

i (unsigned int) – Environment index.

Returns

The array of vectors.

Return type

\(\left(N_{neighbors}, 3\right)\) numpy.ndarray

isSimilar

Test if the motif provided by refPoints1 is similar to the motif provided by refPoints2.

Parameters
  • refPoints1 ((\(N_{particles}\), 3) numpy.ndarray) – Vectors that make up motif 1.

  • refPoints2 ((\(N_{particles}\), 3) numpy.ndarray) – Vectors that make up motif 2.

  • threshold (float) – Maximum magnitude of the vector difference between two vectors, below which they are considered “matching.”

  • registration (bool, optional) – If True, first use brute force registration to orient one set of environment vectors with respect to the other set such that it minimizes the RMSD between the two sets (Default value = False).

Returns

A doublet that gives the rotated (or not) set of refPoints2, and the mapping between the vectors of refPoints1 and refPoints2 that will make them correspond to each other. Empty if they do not correspond to each other.

Return type

tuple ((\(\left(N_{particles}, 3\right)\) numpy.ndarray), map[int, int])

matchMotif

Determine clusters of particles that match the motif provided by refPoints.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Particle positions.

  • refPoints ((\(N_{particles}\), 3) numpy.ndarray) – Vectors that make up the motif against which we are matching.

  • threshold (float) – Maximum magnitude of the vector difference between two vectors, below which they are considered “matching.”

  • registration (bool, optional) – If True, first use brute force registration to orient one set of environment vectors with respect to the other set such that it minimizes the RMSD between the two sets (Default value = False).

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

minRMSDMotif

Rotate (if registration=True) and permute the environments of all particles to minimize their RMSD with respect to the motif provided by refPoints.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Particle positions.

  • refPoints ((\(N_{particles}\), 3) numpy.ndarray) – Vectors that make up the motif against which we are matching.

  • registration (bool, optional) – If True, first use brute force registration to orient one set of environment vectors with respect to the other set such that it minimizes the RMSD between the two sets (Default value = False).

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

Returns

Vector of minimal RMSD values, one value per particle.

Return type

\(\left(N_{particles}\right)\) numpy.ndarray

minimizeRMSD

Get the somewhat-optimal RMSD between the set of vectors refPoints1 and the set of vectors refPoints2.

Parameters
  • refPoints1 ((\(N_{particles}\), 3) numpy.ndarray) – Vectors that make up motif 1.

  • refPoints2 ((\(N_{particles}\), 3) numpy.ndarray) – Vectors that make up motif 2.

  • registration (bool, optional) – If true, first use brute force registration to orient one set of environment vectors with respect to the other set such that it minimizes the RMSD between the two sets (Default value = False).

Returns

A triplet that gives the associated min_rmsd, rotated (or not) set of refPoints2, and the mapping between the vectors of refPoints1 and refPoints2 that somewhat minimizes the RMSD.

Return type

tuple (float, (\(\left(N_{particles}, 3\right)\) numpy.ndarray), map[int, int])

plot

Plot cluster distribution.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

setBox

Reset the simulation box.

Parameters

box (freud.box.Box) – Simulation box.

Angular Separation
class freud.environment.AngularSeparation(rmax, n)

Calculates the minimum angles of separation between particles and references.

Module author: Erin Teich <erteich@umich.edu>

Module author: Andrew Karas <askaras@umich.edu>

Parameters
  • rmax (float) – Cutoff radius for cell list and clustering algorithm. Values near the first minimum of the RDF are recommended.

  • n (int) – The number of neighbors.

Variables
  • nlist (freud.locality.NeighborList) – The neighbor list.

  • n_p (unsigned int) – The number of particles used in computing the last set.

  • n_ref (unsigned int) – The number of reference particles used in computing the neighbor angles.

  • n_global (unsigned int) – The number of global orientations to check against.

  • neighbor_angles (\(\left(N_{bonds}\right)\) numpy.ndarray) – The neighbor angles in radians. This field is only populated after computeNeighbor() is called. The angles are stored in the order of the neighborlist object.

  • global_angles (\(\left(N_{particles}, N_{global} \right)\) numpy.ndarray) – The global angles in radians. This field is only populated after computeGlobal() is called. The angles are stored in the order of the neighborlist object.
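
Example (a small sketch of computeGlobal() using the arguments documented below; the identity quaternions are placeholder orientations, and equiv_quats includes both \(q\) and \(-q\) as required):

import numpy as np
import freud

ors = np.zeros((100, 4), dtype=np.float32)
ors[:, 0] = 1  # identity quaternions
global_ors = np.array([[1, 0, 0, 0]], dtype=np.float32)
equiv_quats = np.array([[1, 0, 0, 0], [-1, 0, 0, 0]], dtype=np.float32)

ang = freud.environment.AngularSeparation(2.0, 6)  # rmax, n
ang.computeGlobal(global_ors, ors, equiv_quats)
print(ang.global_angles.shape)  # (N_particles, N_global)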

computeGlobal

Calculates the minimum angles of separation between global_ors and ors, checking for underlying symmetry as encoded in equiv_quats. The result is stored in the global_angles class attribute.

Parameters
  • global_ors ((\(N_{particles}\), 4) numpy.ndarray) – Reference orientations to calculate the order parameter.

  • ors ((\(N_{particles}\), 4) numpy.ndarray) – Orientations to calculate the order parameter.

  • equiv_quats ((\(N_{particles}\), 4) numpy.ndarray) – The set of all equivalent quaternions that takes the particle as it is defined to some global reference orientation. Important: equiv_quats must include both \(q\) and \(-q\), for all included quaternions.

computeNeighbor

Calculates the minimum angles of separation between ref_ors and ors, checking for underlying symmetry as encoded in equiv_quats. The result is stored in the neighbor_angles class attribute.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_ors ((\(N_{particles}\), 4) numpy.ndarray) – Reference orientations used to calculate the order parameter.

  • ors ((\(N_{particles}\), 4) numpy.ndarray) – Orientations used to calculate the order parameter.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used to calculate the order parameter.

  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points used to calculate the order parameter.

  • equiv_quats ((\(N_{particles}\), 4) numpy.ndarray, optional) – The set of all equivalent quaternions that takes the particle as it is defined to some global reference orientation. Important: equiv_quats must include both \(q\) and \(-q\), for all included quaternions.

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

Local Bond Projection
class freud.environment.LocalBondProjection(rmax, num_neighbors)

Calculates the maximal projection of nearest neighbor bonds for each particle onto some set of reference vectors, defined in the particles’ local reference frame.

Module author: Erin Teich <erteich@umich.edu>

New in version 0.11.

Parameters
  • rmax (float) – Cutoff radius.

  • num_neighbors (unsigned int) – The number of neighbors.

Variables
  • projections (\(\left(N_{reference}, N_{neighbors}, N_{projection\_vecs} \right)\) numpy.ndarray) – The projection of each bond between reference particles and their neighbors onto each of the projection vectors.

  • normed_projections (\(\left(N_{reference}, N_{neighbors}, N_{projection\_vecs} \right)\) numpy.ndarray) – The normalized projection of each bond between reference particles and their neighbors onto each of the projection vectors.

  • num_reference_particles (int) – The number of reference points used in the last calculation.

  • num_particles (int) – The number of points used in the last calculation.

  • num_proj_vectors (int) – The number of projection vectors used in the last calculation.

  • box (freud.box.Box) – The box used in the last calculation.

  • nlist (freud.locality.NeighborList) – The neighbor list generated in the last calculation.
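
Example (a minimal sketch based on the compute() parameters documented below; the box, points, orientations, and single projection vector are placeholder inputs):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)
ors = np.zeros((100, 4), dtype=np.float32)
ors[:, 0] = 1  # identity quaternions
proj_vecs = np.array([[0, 0, 1]], dtype=np.float32)  # project bonds onto the local z axis

lbp = freud.environment.LocalBondProjection(2.0, 6)  # rmax, num_neighbors
lbp.compute(box, proj_vecs, points, ors)
print(lbp.projections.shape)  # projections onto each reference vector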

compute

Calculates the maximal projections of nearest neighbor bonds (between ref_points and points) onto the set of reference vectors proj_vecs, defined in the local reference frames of the ref_points as defined by the orientations ref_ors. This computation accounts for the underlying symmetries of the reference frame as encoded in equiv_quats.

Parameters
  • box (freud.box.Box) – Simulation box.

  • proj_vecs ((\(N_{vectors}\), 3) numpy.ndarray) – The set of reference vectors, defined in the reference particles’ reference frame, to calculate maximal local bond projections onto.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in the calculation.

  • ref_ors ((\(N_{particles}\), 4) numpy.ndarray) – Reference orientations used in the calculation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points (neighbors of ref_points) used in the calculation. Uses ref_points if not provided or None.

  • equiv_quats ((\(N_{quats}\), 4) numpy.ndarray, optional) – The set of all equivalent quaternions that takes the particle as it is defined to some global reference orientation. Note that this does not need to include both \(q\) and \(-q\), since \(q\) and \(-q\) effect the same rotation on vectors. Defaults to an identity quaternion.

  • nlist (freud.locality.NeighborList, optional) – NeighborList to use to find bonds (Default value = None).

Index Module

Overview

freud.index.Index2D

freud-style indexer for flat arrays.

freud.index.Index3D

freud-style indexer for flat arrays.

Details

The freud.index module exposes the \(1\)-dimensional indexer utilized in freud at the C++ level. At the C++ level, freud utilizes flat arrays to represent multidimensional arrays. \(N\)-dimensional arrays with \(n_i\) elements in each dimension \(i\) are represented as \(1\)-dimensional arrays with \(\prod_{i=1}^N n_i\) elements.

Index2D
class freud.index.Index2D(*args)

freud-style indexer for flat arrays.

Once constructed, the object provides direct access to the flat index equivalent:

  • Constructor Calls:

    Initialize with all dimensions identical:

    freud.index.Index2D(w)
    

    Initialize with each dimension specified:

    freud.index.Index2D(w, h)
    

Note

freud indexes column-first i.e. Index2D(i, j) will return the \(1\)-dimensional index of the \(i^{th}\) column and the \(j^{th}\) row. This is the opposite of what occurs in a numpy array, in which array[i, j] returns the element in the \(i^{th}\) row and the \(j^{th}\) column.

Module author: Joshua Anderson <joaander@umich.edu>

Parameters
  • w (unsigned int) – Width of 2D array (number of columns).

  • h (unsigned int) – Height of 2D array (number of rows).

Variables

num_elements (unsigned int) – Number of elements in the array.

Example:

index = Index2D(10)  # 10 x 10 array
i = index(3, 5)      # flat index of column 3, row 5
__call__(self, i, j)
Parameters
  • i (unsigned int) – Column index.

  • j (unsigned int) – Row index.

Returns

Index in flat (e.g. \(1\)-dimensional) array.

Return type

unsigned int

Index3D
class freud.index.Index3D(*args)

freud-style indexer for flat arrays.

Once constructed, the object provides direct access to the flat index equivalent:

  • Constructor Calls:

    Initialize with all dimensions identical:

    freud.index.Index3D(w)
    

    Initialize with each dimension specified:

    freud.index.Index3D(w, h, d)
    

Note

freud indexes column-first i.e. Index3D(i, j, k) will return the \(1\)-dimensional index of the \(i^{th}\) column, \(j^{th}\) row, and the \(k^{th}\) frame. This is the opposite of what occurs in a numpy array, in which array[i, j, k] returns the element in the \(i^{th}\) frame, \(j^{th}\) row, and the \(k^{th}\) column.

Module author: Joshua Anderson <joaander@umich.edu>

Parameters
  • w (unsigned int) – Width of 3D array (number of columns).

  • h (unsigned int) – Height of 3D array (number of rows).

  • d (unsigned int) – Depth of 3D array (number of frames).

Variables

num_elements (unsigned int) – Number of elements in the array.

Example:

index = Index3D(10)  # 10 x 10 x 10 array
i = index(3, 5, 4)   # flat index of column 3, row 5, frame 4
__call__(self, i, j, k)
Parameters
  • i (unsigned int) – Column index.

  • j (unsigned int) – Row index.

  • k (unsigned int) – Frame index.

Returns

Index in flat (e.g. \(1\)-dimensional) array.

Return type

unsigned int

Interface Module

Overview

freud.interface.InterfaceMeasure

Measures the interface between two sets of points.

Details

The freud.interface module contains functions to measure the interface between sets of points.

Interface Measure
class freud.interface.InterfaceMeasure(box, r_cut)

Measures the interface between two sets of points.

Module author: Matthew Spellings <mspells@umich.edu>

Module author: Bradley Dice <bdice@bradleydice.com>

Parameters
  • box (freud.box.Box) – Simulation box.

  • r_cut (float) – Distance to search for particle neighbors.

Variables
  • ref_point_count (int) – Number of particles from ref_points on the interface.

  • ref_point_ids (np.ndarray) – The particle IDs from ref_points.

  • point_count (int) – Number of particles from points on the interface.

  • point_ids (np.ndarray) – The particle IDs from points.
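
Example (a minimal sketch based on the documented constructor and compute() parameters; the two random point sets stand in for, e.g., two phases of a real system):

import numpy as np
import freud

box = freud.box.Box.cube(10)
ref_points = np.random.uniform(-5, 5, size=(50, 3)).astype(np.float32)
points = np.random.uniform(-5, 5, size=(50, 3)).astype(np.float32)

iface = freud.interface.InterfaceMeasure(box, 1.5)  # box, r_cut
iface.compute(ref_points, points)
print(iface.ref_point_count, iface.point_count)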

compute

Compute the particles at the interface between the two given sets of points.

Parameters
  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – One set of particle positions.

  • points ((\(N_{particles}\), 3) numpy.ndarray) – Other set of particle positions.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

property point_count
property point_ids
property ref_point_count
property ref_point_ids

Locality Module

Overview

freud.locality.NeighborQuery

Class representing a set of points along with the ability to query for neighbors of these points.

freud.locality.NeighborQueryResult

Class encapsulating the output of queries of NeighborQuery objects.

freud.locality.NeighborList

Class representing a certain number of “bonds” between particles.

freud.locality.IteratorLinkCell

Iterates over the particles in a cell.

freud.locality.LinkCell

Supports efficiently finding all points in a set within a certain distance from a given point.

freud.locality.NearestNeighbors

Supports efficiently finding the \(N\) nearest neighbors of each point in a set for some fixed integer \(N\).

freud.locality.AABBQuery

Use an AABB tree to find neighbors.

Details

The freud.locality module contains data structures to efficiently locate points based on their proximity to other points.

Neighbor List
class freud.locality.NeighborList

Class representing a certain number of “bonds” between particles. Computation methods will iterate over these bonds when searching for neighboring particles.

NeighborList objects are constructed for two sets of position arrays A (alternatively reference points; of length \(n_A\)) and B (alternatively target points; of length \(n_B\)) and hold a set of \(\left(i, j\right): i < n_A, j < n_B\) index pairs corresponding to near-neighbor points in A and B, respectively.

For efficiency, all bonds for a particular reference particle \(i\) are contiguous and bonds are stored in order based on reference particle index \(i\). The first bond index corresponding to a given particle can be found in \(\log(n_{bonds})\) time using find_first_index().

Module author: Matthew Spellings <mspells@umich.edu>

New in version 0.6.4.

Note

Typically, there is no need to instantiate this class directly. In most cases, users should manipulate freud.locality.NeighborList objects received from a neighbor search algorithm, such as freud.locality.LinkCell, freud.locality.NearestNeighbors, or freud.voronoi.Voronoi.

Variables
  • index_i (np.ndarray) – The reference point indices from the last set of points this object was evaluated with. This array is read-only to prevent breakage of find_first_index().

  • index_j (np.ndarray) – The target point indices from the last set of points this object was evaluated with. This array is read-only to prevent breakage of find_first_index().

  • weights ((\(N_{bonds}\)) np.ndarray) – The per-bond weights from the last set of points this object was evaluated with.

  • segments ((\(N_{ref\_points}\)) np.ndarray) – A segment array, which is an array of length \(N_{ref}\) indicating the first bond index for each reference particle from the last set of points this object was evaluated with.

  • neighbor_counts ((\(N_{ref\_points}\)) np.ndarray) – A neighbor count array, which is an array of length \(N_{ref}\) indicating the number of neighbors for each reference particle from the last set of points this object was evaluated with.

Example:

# Assume positions is an Nx3 array
lc = LinkCell(box, 1.5).compute(box, positions)
nlist = lc.nlist

# Get all vectors from central particles to their neighbors
rijs = positions[nlist.index_j] - positions[nlist.index_i]
box.wrap(rijs)

The NeighborList can be indexed to access bond particle indices. Example:

for i, j in lc.nlist[:]:
    print(i, j)
copy

Create a copy. If other is given, copy its contents into this object. Otherwise, return a copy of this object.

Parameters

other (freud.locality.NeighborList, optional) – A NeighborList to copy into this object (Default value = None).

filter

Removes bonds that satisfy a boolean criterion.

Parameters

filt (np.ndarray) – Boolean-like array of bonds to keep (True means the bond will not be removed).

Note

This method modifies this object in-place.

Example:

# Keep only the bonds between particles of type A and type B
nlist.filter(types[nlist.index_i] != types[nlist.index_j])
filter_r

Removes bonds that are outside of a given radius range.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points to use for filtering.

  • points ((\(N_{particles}\), 3) numpy.ndarray) – Target points to use for filtering.

  • rmax (float) – Maximum bond distance in the resulting neighbor list.

  • rmin (float, optional) – Minimum bond distance in the resulting neighbor list (Default value = 0).

find_first_index

Returns the lowest bond index corresponding to a reference particle with an index \(\geq i\).

Parameters

i (unsigned int) – The particle index.

classmethod from_arrays(type cls, Nref, Ntarget, index_i, index_j, weights=None)

Create a NeighborList from a set of bond information arrays.

Parameters
  • Nref (int) – Number of reference points (corresponding to index_i).

  • Ntarget (int) – Number of target points (corresponding to index_j).

  • index_i (np.ndarray) – Array of integers corresponding to indices in the set of reference points.

  • index_j (np.ndarray) – Array of integers corresponding to indices in the set of target points.

  • weights (np.ndarray, optional) – Array of per-bond weights (if None is given, use a value of 1 for each weight) (Default value = None).
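
Example (a small sketch of building a NeighborList by hand from bond arrays; the three bonds are arbitrary, and index_i is kept sorted by reference index as assumed by find_first_index()):

import numpy as np
import freud

# Three bonds: reference particle 0 bonded to targets 1 and 2, reference particle 1 to target 2
index_i = np.array([0, 0, 1])
index_j = np.array([1, 2, 2])

nlist = freud.locality.NeighborList.from_arrays(3, 3, index_i, index_j)
print(nlist.neighbor_counts)  # [2, 1, 0]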

Neighbor Querying
class freud.locality.NeighborQuery

Class representing a set of points along with the ability to query for neighbors of these points.

Warning

This class should not be instantiated directly. The subclasses AABBQuery and LinkCell provide the intended interfaces.

The NeighborQuery class represents the abstract interface for neighbor finding. The class contains a set of points and a simulation box, the latter of which is used to define the system and the periodic boundary conditions required for finding neighbors of these points. The primary mode of interacting with the NeighborQuery is through the query() and queryBall() functions, which enable finding either the nearest neighbors of a point or all points within a distance cutoff, respectively. Subclasses of NeighborQuery implement these methods based on the nature of the underlying data structure.

Module author: Vyas Ramasubramani <vramasub@umich.edu>

New in version 1.1.0.

Parameters
Variables
  • box (freud.box.Box) – The box object used by this data structure.

  • points (np.ndarray) – The array of points in this data structure.

query

Query for nearest neighbors of the provided point.

Parameters
  • points ((\(N\), 3) numpy.ndarray) – Points to query for.

  • k (int) – The number of nearest neighbors to find.

  • exclude_ii (bool, optional) – Set this to True if pairs of points with identical indices to those in self.points should be excluded. If this is None, it will be treated as True if points is None or the same object as ref_points (Defaults to None).

Returns

Results object containing the output of this query.

Return type

NeighborQueryResult

queryBall

Query for all points within a distance r of the provided point(s).

Parameters
  • points ((\(N\), 3) numpy.ndarray) – Points to query for.

  • r (float) – The distance within which to find neighbors.

  • exclude_ii (bool, optional) – Set this to True if pairs of points with identical indices to those in self.points should be excluded. If this is None, it will be treated as True if points is None or the same object as ref_points (Defaults to None).

Returns

Results object containing the output of this query.

Return type

NeighborQueryResult

class freud.locality.NeighborQueryResult

Class encapsulating the output of queries of NeighborQuery objects.

Warning

This class should not be instantiated directly, it is the return value of all query* functions of NeighborQuery. The class provides a convenient interface for iterating over query results, and can be transparently converted into a list or a NeighborList object.

The NeighborQueryResult makes it easy to work with the results of queries and convert them to various natural objects. Additionally, the result is a generator, making it easy for users to lazily iterate over the object.

Module author: Vyas Ramasubramani <vramasub@umich.edu>

New in version 1.1.0.

toNList

Convert query result to a freud NeighborList.

Returns

A freud NeighborList containing all neighbor pairs found by the query generating this result object.

Return type

NeighborList

class freud.locality.AABBQuery

Use an AABB tree to find neighbors.

Module author: Bradley Dice <bdice@bradleydice.com>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

New in version 1.1.0.

Variables
  • box (freud.locality.Box) – The simulation box.

  • points (np.ndarray) – The points associated with this class.

query

Query for nearest neighbors of the provided point.

This method has a slightly different signature from the parent method to support querying based on a specified initial guess for the cutoff radius and a scale factor.

Parameters
  • box (freud.box.Box) – Simulation box.

  • points ((\(N\), 3) numpy.ndarray) – Points to query for.

  • k (int) – The number of nearest neighbors to find.

  • r (float) – The initial guess of a distance to search to find N neighbors.

  • scale (float) – Multiplier by which to increase r if not enough neighbors are found.

Returns

Results object containing the output of this query.

Return type

NeighborQueryResult

queryBall

Query for all points within a distance r of the provided point(s).

Parameters
  • points ((\(N\), 3) numpy.ndarray) – Points to query for.

  • r (float) – The distance within which to find neighbors.

  • exclude_ii (bool, optional) – Set this to True if pairs of points with identical indices to those in self.points should be excluded. If this is None, it will be treated as True if points is None or the same object as ref_points (Defaults to None).

Returns

Results object containing the output of this query.

Return type

NeighborQueryResult

class freud.locality.LinkCell(box, cell_width)

Supports efficiently finding all points in a set within a certain distance from a given point.

Module author: Joshua Anderson <joaander@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • cell_width (float) – Maximum distance to find particles within.

  • points (np.ndarray, optional) – The points associated with this class, if used as a NeighborQuery object, i.e. built on one set of points that can then be queried against. (Defaults to None).

Variables
  • box (freud.box.Box) – Simulation box.

  • num_cells (unsigned int) – The number of cells in the box.

  • nlist (freud.locality.NeighborList) – The neighbor list stored by this object, generated by compute().

Note

2D: freud.locality.LinkCell properly handles 2D boxes. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Example:

# Assume positions are an Nx3 array
lc = LinkCell(box, 1.5)
lc.compute(box, positions)
for i in range(positions.shape[0]):
    # Cell containing particle i
    cell = lc.getCell(positions[i])
    # List of cell's neighboring cells
    cellNeighbors = lc.getCellNeighbors(cell)
    # Iterate over neighboring cells (including our own)
    for neighborCell in cellNeighbors:
        # Iterate over particles in each neighboring cell
        for neighbor in lc.itercell(neighborCell):
            pass # Do something with neighbor index

# Using NeighborList API
dens = density.LocalDensity(1.5, 1, 1)
dens.compute(box, positions, nlist=lc.nlist)
compute

Update the data structure for the given set of points and compute a NeighborList.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference point coordinates.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Point coordinates (Default value = None).

  • exclude_ii (bool, optional) – Set this to True if pairs of points with identical indices should be excluded. If this is None, it will be treated as True if points is None or the same object as ref_points (Defaults to None).

getCell

Returns the index of the cell containing the given point.

Parameters

point (\(\left(3\right)\) numpy.ndarray) – Point coordinates \(\left(x,y,z\right)\).

Returns

Cell index.

Return type

unsigned int

getCellNeighbors

Returns the neighboring cell indices of the given cell.

Parameters

cell (unsigned int) – Cell index.

Returns

Array of cell neighbors.

Return type

\(\left(N_{neighbors}\right)\) numpy.ndarray

itercell

Return an iterator over all particles in the given cell.

Parameters

cell (unsigned int) – Cell index.

Returns

Iterator to particle indices in specified cell.

Return type

iter

query

Query for nearest neighbors of the provided point.

Parameters
  • points ((\(N\), 3) numpy.ndarray) – Points to query for.

  • k (int) – The number of nearest neighbors to find.

  • exclude_ii (bool, optional) – Set this to True if pairs of points with identical indices to those in self.points should be excluded. If this is None, it will be treated as True if points is None or the same object as ref_points (Defaults to None).

Returns

Results object containing the output of this query.

Return type

NeighborQueryResult

queryBall

Query for all points within a distance r of the provided point(s).

Parameters
  • points ((\(N\), 3) numpy.ndarray) – Points to query for.

  • r (float) – The distance within which to find neighbors.

  • exclude_ii (bool, optional) – Set this to True if pairs of points with identical indices to those in self.points should be excluded. If this is None, it will be treated as True if points is None or the same object as ref_points (Defaults to None).

Returns

Results object containing the output of this query.

Return type

NeighborQueryResult
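
Example (a sketch of the query interface using LinkCell as the NeighborQuery implementation; passing the optional points argument at construction and converting the result with toNList() follows the descriptions above, and the specific values are placeholders):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)

lc = freud.locality.LinkCell(box, 1.5, points)
# Find the 6 nearest neighbors of each point, excluding self-pairs
result = lc.query(points, 6, exclude_ii=True)
nlist = result.toNList()
print(nlist.index_i.shape[0])  # total number of bonds found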

class freud.locality.IteratorLinkCell

Iterates over the particles in a cell.

Module author: Joshua Anderson <joaander@umich.edu>

Example:

# Grab particles in cell 0
for j in linkcell.itercell(0):
    print(positions[j])
next

Implements iterator interface

Nearest Neighbors
class freud.locality.NearestNeighbors(rmax, n_neigh, scale=1.1, strict_cut=False)

Supports efficiently finding the \(N\) nearest neighbors of each point in a set for some fixed integer \(N\).

  • strict_cut == True: rmax will be strictly obeyed, and any particle which has fewer than \(N\) neighbors will have values of UINT_MAX assigned.

  • strict_cut == False (default): rmax will be expanded to find the requested number of neighbors. If rmax increases to the point that a cell list cannot be constructed, a warning will be raised and the neighbors already found will be returned.

Module author: Eric Harper <harperic@umich.edu>

Parameters
  • rmax (float) – Initial guess of a distance to search within to find N neighbors.

  • n_neigh (unsigned int) – Number of neighbors to find for each point.

  • scale (float) – Multiplier by which to automatically increase rmax value if the requested number of neighbors is not found. Only utilized if strict_cut is False. Scale must be greater than 1.

  • strict_cut (bool) – Whether to use a strict rmax or allow for automatic expansion, default is False.

Variables
  • UINTMAX (unsigned int) – Value of C++ UINTMAX used to pad the arrays.

  • box (freud.box.Box) – Simulation box.

  • num_neighbors (unsigned int) – The number of neighbors this object will find.

  • n_ref (unsigned int) – The number of particles this object found neighbors of.

  • r_max (float) – Current nearest neighbors search radius guess.

  • wrapped_vectors (\(\left(N_{particles}\right)\) numpy.ndarray) – The wrapped vectors padded with -1 for empty neighbors.

  • r_sq_list (\(\left(N_{particles}, N_{neighbors}\right)\) numpy.ndarray) – The Rsq values list.

  • nlist (freud.locality.NeighborList) – The neighbor list stored by this object, generated by compute().

Example:

nn = NearestNeighbors(2, 6)
nn.compute(box, positions, positions)
hexatic = order.HexOrderParameter(2)
hexatic.compute(box, positions, nlist=nn.nlist)
compute

Update the data structure for the given set of points.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference point coordinates.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Point coordinates. Defaults to ref_points if not provided or None.

  • exclude_ii (bool, optional) – Set this to True if pairs of points with identical indices should be excluded. If this is None, it will be treated as True if points is None or the same object as ref_points (Defaults to None).

getNeighborList

Return the entire neighbor list.

Returns

Indices of up to \(N_{neighbors}\) points that are neighbors of the \(N_{particles}\) reference points, padded with UINTMAX if fewer neighbors than requested were found.

Return type

\(\left(N_{particles}, N_{neighbors}\right)\) numpy.ndarray

getNeighbors

Return the \(N\) nearest neighbors of the reference point with index \(i\).

Parameters

i (unsigned int) – Index of the reference point whose neighbors will be returned.

Returns

Indices of points that are neighbors of reference point \(i\), padded with UINTMAX if fewer neighbors than requested were found.

Return type

\(\left(N_{neighbors}\right)\) numpy.ndarray

getRsq

Return the squared distances to the \(N\) nearest neighbors of the reference point with index \(i\).

Parameters

i (unsigned int) – Index of the reference point of which to fetch the neighboring point distances.

Returns

Squared distances to the \(N\) nearest neighbors.

Return type

\(\left(N_{particles}\right)\) numpy.ndarray

MSD Module

Overview

freud.msd.MSD

Compute the mean squared displacement.

Details

The freud.msd module provides functions for computing the mean-squared-displacement (MSD) of particles in periodic systems.

MSD
class freud.msd.MSD(box=None, mode=None)

Compute the mean squared displacement.

The mean squared displacement (MSD) measures how much particles move over time. The MSD plays an important role in characterizing Brownian motion, since it provides a measure of whether particles are moving according to diffusion alone or if there are other forces contributing. There are a number of definitions for the mean squared displacement. This function provides access to the two most common definitions through the mode argument.

  • 'window' (default): This mode calculates the most common form of the MSD, which is defined as

    \[MSD(m) = \frac{1}{N_{particles}} \sum_{i=1}^{N_{particles}} \frac{1}{N-m} \sum_{k=0}^{N-m-1} (\vec{r}_i(k+m) - \vec{r}_i(k))^2\]

    where \(r_i(t)\) is the position of particle \(i\) in frame \(t\). According to this definition, the mean squared displacement is the average displacement over all windows of length \(m\) over the course of the simulation. Therefore, for any \(m\), \(MSD(m)\) is averaged over all windows of length \(m\) and over all particles. This calculation can be accessed using the ‘window’ mode of this function.

    The windowed calculation can be quite computationally intensive. To perform this calculation efficiently, we use the algorithm described in [Calandrini2011] as described in this StackOverflow thread.

    Note

    The most intensive part of this calculation is computing an FFT. To maximize performance, freud attempts to use the fastest FFT library available. By default, the order of preference is pyFFTW, SciPy, and then NumPy. If you are experiencing significant slowdowns in calculating the MSD, you may benefit from installing a faster FFT library, which freud will automatically detect. The performance change will be especially noticeable if the length of your trajectory is a number whose prime factorization consists of extremely large prime factors. The standard Cooley-Tukey FFT algorithm performs very poorly in this case, so installing pyFFTW will significantly improve performance.

    Note that while pyFFTW is released under the BSD 3-Clause license, the FFTW library is available under either GPL or a commercial license. As a result, if you wish to use this module with pyFFTW in code, your code must also be GPL licensed unless you purchase a commercial license.

  • 'direct': Under some circumstances, however, we may be more interested in calculating a different quantity described by

    \[MSD(t) = \frac{1}{N_{particles}} \sum_{i=1}^{N_{particles}} (\vec{r}_i(t) - \vec{r}_i(0))^2\]

    In this case, at each time point (i.e. simulation frame) we simply compute how much particles have moved from their initial position, averaged over all particles. For more information on this calculation, see the Wikipedia page.

Note

The MSD is only well-defined when the box is constant over the course of the simulation. Additionally, the number of particles must be constant over the course of the simulation.

Module author: Vyas Ramasubramani <vramasub@umich.edu>

New in version 1.0.

Parameters
  • box (freud.box.Box, optional) – If not provided, the class will assume that all positions provided in calls to compute() or accumulate() are already unwrapped.

  • mode (str, optional) – Mode of calculation. Options are 'window' and 'direct'. (Default value = 'window').

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • msd (\(\left(N_{frames}, \right)\) numpy.ndarray) – The mean squared displacement.
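
Example (a minimal sketch in which a random walk stands in for an unwrapped trajectory; since no box or images are supplied, the positions are treated as already unwrapped):

import numpy as np
import freud

# 50 frames of 100 particles performing a simple random walk (already unwrapped)
steps = np.random.normal(scale=0.1, size=(50, 100, 3)).astype(np.float32)
positions = np.cumsum(steps, axis=0)

msd = freud.msd.MSD()  # default 'window' mode
msd.compute(positions)
print(msd.msd.shape)   # one MSD value per frame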

accumulate

Calculate the MSD for the positions provided and add to the existing per-particle data.

Note

Unlike most methods in freud, accumulation for the MSD is split over particles rather than frames of a simulation. The reason for this choice is that efficient computation of the MSD requires using the entire trajectory for a given particle. As a result, this accumulation is primarily useful when the trajectory is so large that computing an MSD on all particles at once is prohibitive.

Parameters
  • positions ((\(N_{frames}\), \(N_{particles}\), 3) numpy.ndarray) – The particle positions over a trajectory. If neither box nor images are provided, the positions are assumed to be unwrapped already.

  • images ((\(N_{frames}\), \(N_{particles}\), 3) numpy.ndarray, optional) – The particle images to unwrap with if provided. Must be provided along with a simulation box (in the constructor) if particle positions need to be unwrapped. If neither are provided, positions are assumed to be unwrapped already.

compute

Calculate the MSD for the positions provided.

Parameters
  • positions ((\(N_{frames}\), \(N_{particles}\), 3) numpy.ndarray) – The particle positions over a trajectory. If neither box nor images are provided, the positions are assumed to be unwrapped already.

  • images ((\(N_{frames}\), \(N_{particles}\), 3) numpy.ndarray, optional) – The particle images to unwrap with if provided. Must be provided along with a simulation box (in the constructor) if particle positions need to be unwrapped. If neither are provided, positions are assumed to be unwrapped already.

plot

Plot MSD.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

reset

Clears the stored MSD values from previous calls to accumulate (or the last call to compute).

Order Module

Overview

freud.order.CubaticOrderParameter

Compute the cubatic order parameter [HajiAkbari2015] for a system of particles using simulated annealing instead of Newton-Raphson root finding.

freud.order.NematicOrderParameter

Compute the nematic order parameter for a system of particles.

freud.order.HexOrderParameter

Calculates the \(k\)-atic order parameter for each particle in the system.

freud.order.TransOrderParameter

Compute the translational order parameter for each particle.

freud.order.LocalQl

Compute the local Steinhardt [Steinhardt1983] rotationally invariant \(Q_l\) order parameter for a set of points.

freud.order.LocalQlNear

A variant of the LocalQl class that performs its average over nearest neighbor particles as determined by an instance of freud.locality.NeighborList.

freud.order.LocalWl

Compute the local Steinhardt [Steinhardt1983] rotationally invariant \(W_l\) order parameter for a set of points.

freud.order.LocalWlNear

A variant of the LocalWl class that performs its average over nearest neighbor particles as determined by an instance of freud.locality.NeighborList.

freud.order.SolLiq

Uses dot products of \(Q_{lm}\) between particles for clustering.

freud.order.SolLiqNear

A variant of the SolLiq class that performs its average over nearest neighbor particles as determined by an instance of freud.locality.NeighborList.

freud.order.RotationalAutocorrelation

Calculates a measure of total rotational autocorrelation based on hyperspherical harmonics as laid out in “Design rules for engineering colloidal plastic crystals of hard polyhedra - phase behavior and directional entropic forces” by Karas et al.

Details

The freud.order module contains functions which compute order parameters for the whole system or individual particles. Order parameters take bond order data and interpret it in some way to quantify the degree of order in a system using a scalar value. This is often done through computing spherical harmonics of the bond order diagram, which are the spherical analogue of Fourier Transforms.

Cubatic Order Parameter
class freud.order.CubaticOrderParameter(t_initial, t_final, scale, n_replicates, seed)

Compute the cubatic order parameter [HajiAkbari2015] for a system of particles using simulated annealing instead of Newton-Raphson root finding.

Module author: Eric Harper <harperic@umich.edu>

Parameters
  • t_initial (float) – Starting temperature.

  • t_final (float) – Final temperature.

  • scale (float) – Scaling factor to reduce temperature.

  • n_replicates (unsigned int) – Number of replicate simulated annealing runs.

  • seed (unsigned int) – Random seed to use in calculations. If None, system time is used.

Variables
  • t_initial (float) – The value of the initial temperature.

  • t_final (float) – The value of the final temperature.

  • scale (float) – The scale

  • cubatic_order_parameter (float) – The cubatic order parameter.

  • orientation (\(\left(4 \right)\) numpy.ndarray) – The quaternion of global orientation.

  • particle_order_parameter (numpy.ndarray) – Cubatic order parameter.

  • particle_tensor (\(\left(N_{particles}, 3, 3, 3, 3 \right)\) numpy.ndarray) – Rank 5 tensor corresponding to each individual particle orientation.

  • global_tensor (\(\left(3, 3, 3, 3 \right)\) numpy.ndarray) – Rank 4 tensor corresponding to global orientation.

  • cubatic_tensor (\(\left(3, 3, 3, 3 \right)\) numpy.ndarray) – Rank 4 cubatic tensor.

  • gen_r4_tensor (\(\left(3, 3, 3, 3 \right)\) numpy.ndarray) – Rank 4 tensor corresponding to each individual particle orientation.
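
Example (a brief sketch using the documented constructor arguments; the annealing schedule values, the identity quaternions, and the scalar-first quaternion convention are placeholder assumptions):

import numpy as np
import freud

orientations = np.zeros((100, 4), dtype=np.float32)
orientations[:, 0] = 1  # identity quaternions

# t_initial, t_final, scale, n_replicates, seed
cop = freud.order.CubaticOrderParameter(5.0, 0.001, 0.95, 10, 0)
cop.compute(orientations)
print(cop.cubatic_order_parameter)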

compute

Calculates the per-particle and global order parameter.

Parameters

orientations ((\(N_{particles}\), 4) numpy.ndarray) – Orientations (as quaternions) to use in computation.

Nematic Order Parameter
class freud.order.NematicOrderParameter(u)

Compute the nematic order parameter for a system of particles.

Module author: Jens Glaser <jsglaser@umich.edu>

New in version 0.7.0.

Parameters

u (\(\left(3 \right)\) numpy.ndarray) – The nematic director of a single particle in the reference state (without any rotation applied).

Variables
  • nematic_order_parameter (float) – Nematic order parameter.

  • director (\(\left(3 \right)\) numpy.ndarray) – The average nematic director.

  • particle_tensor (\(\left(N_{particles}, 3, 3 \right)\) numpy.ndarray) – One 3x3 matrix per-particle corresponding to each individual particle orientation.

  • nematic_tensor (\(\left(3, 3 \right)\) numpy.ndarray) – 3x3 matrix corresponding to the average particle orientation.
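
Example (a small sketch with perfectly aligned particles, for which the order parameter should be close to 1; the identity quaternions, scalar-first convention, and x-axis director are placeholder assumptions):

import numpy as np
import freud

orientations = np.zeros((100, 4), dtype=np.float32)
orientations[:, 0] = 1  # identity quaternions: every particle in the reference state
u = np.array([1, 0, 0], dtype=np.float32)  # nematic director of the reference state

nop = freud.order.NematicOrderParameter(u)
nop.compute(orientations)
print(nop.nematic_order_parameter)  # close to 1 for perfect alignment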

compute

Calculates the per-particle and global order parameter.

Parameters

orientations (\(\left(N_{particles}, 4 \right)\) numpy.ndarray) – Orientations to calculate the order parameter.

Hexatic Order Parameter
class freud.order.HexOrderParameter(rmax, k, n)

Calculates the \(k\)-atic order parameter for each particle in the system.

The \(k\)-atic order parameter for a particle \(i\) and its \(n\) neighbors \(j\) is given by:

\(\psi_k \left( i \right) = \frac{1}{n} \sum_j^n e^{k i \phi_{ij}}\)

The parameter \(k\) governs the symmetry of the order parameter while the parameter \(n\) governs the number of neighbors of particle \(i\) to average over. \(\phi_{ij}\) is the angle between the vector \(r_{ij}\) and \(\left( 1,0 \right)\).

Note

2D: freud.order.HexOrderParameter properly handles 2D boxes. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Module author: Eric Harper <harperic@umich.edu>

Parameters
  • rmax (float) – +/- r distance to search for neighbors.

  • k (unsigned int) – Symmetry of order parameter (\(k=6\) is hexatic).

  • n (unsigned int) – Number of neighbors (\(n=k\) if \(n\) not specified).

Variables
  • psi (\(\left(N_{particles} \right)\) numpy.ndarray) – Order parameter.

  • box (freud.box.Box) – Box used in the calculation.

  • num_particles (unsigned int) – Number of particles.

  • K (unsigned int) – Symmetry of the order parameter.
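
Example (a sketch for a 2D system; since the compute() parameter list is not reproduced here, the (box, points) call below is an assumption modeled on the other compute methods in this module, and the box and random points are placeholders):

import numpy as np
import freud

box = freud.box.Box.square(10)
points = np.zeros((100, 3), dtype=np.float32)
points[:, :2] = np.random.uniform(-5, 5, size=(100, 2))  # z must be 0 for 2D boxes

hop = freud.order.HexOrderParameter(1.5, 6, 6)  # rmax, k, n
hop.compute(box, points)  # assumed signature: compute(box, points, nlist=None)
print(np.abs(hop.psi).mean())  # |psi_6| averaged over particles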

compute

Calculates the hexatic order parameter.

Parameters
Translational Order Parameter
class freud.order.TransOrderParameter(rmax, k, n)

Compute the translational order parameter for each particle.

Module author: Wenbo Shen <shenwb@umich.edu>

Parameters
  • rmax (float) – +/- r distance to search for neighbors.

  • k (float) – Symmetry of order parameter (\(k=6\) is hexatic).

  • n (unsigned int) – Number of neighbors (\(n=k\) if \(n\) not specified).

Variables
  • d_r (\(\left(N_{particles}\right)\) numpy.ndarray) – Reference to the last computed translational order array.

  • box (freud.box.Box) – Box used in the calculation.

  • num_particles (unsigned int) – Number of particles.

  • K (float) – Normalization value (d_r is divided by K).
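
Example (analogous to the hexatic sketch above; the (box, points) call is again an assumption since the compute() parameter list is not reproduced here, and the inputs are placeholders):

import numpy as np
import freud

box = freud.box.Box.square(10)
points = np.zeros((100, 3), dtype=np.float32)
points[:, :2] = np.random.uniform(-5, 5, size=(100, 2))  # z must be 0 for 2D boxes

top = freud.order.TransOrderParameter(1.5, 6, 6)  # rmax, k, n
top.compute(box, points)  # assumed signature: compute(box, points, nlist=None)
print(np.abs(top.d_r).mean())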

compute

Calculates the translational order parameter.

Parameters
Steinhardt \(Q_l\) Order Parameter
class freud.order.LocalQl(box, rmax, l, rmin)

Compute the local Steinhardt [Steinhardt1983] rotationally invariant \(Q_l\) order parameter for a set of points.

Implements the local rotationally invariant \(Q_l\) order parameter described by Steinhardt. For a particle i, we calculate the average \(Q_l\) by summing the spherical harmonics between particle \(i\) and its neighbors \(j\) in a local region: \(\overline{Q}_{lm}(i) = \frac{1}{N_b} \displaystyle\sum_{j=1}^{N_b} Y_{lm}(\theta(\vec{r}_{ij}), \phi(\vec{r}_{ij}))\). The particles included in the sum are determined by the rmax argument to the constructor.

This is then combined in a rotationally invariant fashion to remove local orientational order as follows: \(Q_l(i)=\sqrt{\frac{4\pi}{2l+1} \displaystyle\sum_{m=-l}^{l} |\overline{Q}_{lm}|^2 }\).

The computeAve() method provides access to a variant of this parameter that performs an average over the first and second shell combined [Lechner2008]. To compute this parameter, we perform a second averaging over the first neighbor shell of the particle to implicitly include information about the second neighbor shell. This averaging is performed by replacing the value \(\overline{Q}_{lm}(i)\) in the original definition by the average value of \(\overline{Q}_{lm}(k)\) over all the \(k\) neighbors of particle \(i\) as well as itself.

The computeNorm() and computeAveNorm() methods provide normalized versions of compute() and computeAve(), where the normalization is performed by dividing by the average \(Q_{lm}\) values over all particles.

Module author: Xiyu Du <xiyudu@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • rmax (float) – Cutoff radius for the local order parameter. Values near the first minimum of the RDF are recommended.

  • l (unsigned int) – Spherical harmonic quantum number l. Must be a positive integer.

  • rmin (float) – Lower bound for computing the local order parameter. Allows looking at, for instance, only the second shell, or some other arbitrary RDF region (Default value = 0).

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • num_particles (unsigned int) – Number of particles.

  • Ql (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(Q_l\) for each particle (filled with NaN for particles with no neighbors).

  • ave_Ql (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(\bar{Q_l}\) for each particle (filled with NaN for particles with no neighbors).

  • norm_Ql (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(Q_l\) for each particle normalized by the value over all particles (filled with NaN for particles with no neighbors).

  • ave_norm_Ql (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(\bar{Q_l}\) for each particle normalized by the value over all particles (filled with NaN for particles with no neighbors).
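
Example (a minimal sketch using the documented constructor and the compute()/computeAve() variants; the box, random points, and rmax are placeholder inputs):

import numpy as np
import freud

box = freud.box.Box.cube(10)
points = np.random.uniform(-5, 5, size=(100, 3)).astype(np.float32)

ql = freud.order.LocalQl(box, 1.5, 6)  # box, rmax, l
ql.compute(points)
print(ql.Ql)       # per-particle Q_6
ql.computeAve(points)
print(ql.ave_Ql)   # neighbor-averaged variant [Lechner2008]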

compute

Compute the order parameter.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeAve

Compute the order parameter over two nearest neighbor shells.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeAveNorm

Compute the order parameter over two nearest neighbor shells normalized by the average spherical harmonic value over all the particles.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeNorm

Compute the order parameter normalized by the average spherical harmonic value over all the particles.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

plot

Plot Ql.

Parameters
  • ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

  • mode (str) – Plotting mode. Must be one of “Ql”, “ave_Ql”, “norm_Ql” and “ave_norm_Ql”. Plots the given attribute. If None, plot the most recent computed attribute. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

setBox

Reset the simulation box.

Parameters

box (freud.box.Box) – Simulation box.

class freud.order.LocalQlNear(box, rmax, l, kn)

A variant of the LocalQl class that performs its average over nearest neighbor particles as determined by an instance of freud.locality.NeighborList. The number of included neighbors is determined by the kn parameter to the constructor.

Module author: Xiyu Du <xiyudu@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • rmax (float) – Cutoff radius for the local order parameter. Values near the first minimum of the RDF are recommended.

  • l (unsigned int) – Spherical harmonic quantum number l. Must be a positive number.

  • kn (unsigned int) – Number of nearest neighbors. Must be a positive integer.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • num_particles (unsigned int) – Number of particles.

  • Ql (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(Q_l\) for each particle (filled with NaN for particles with no neighbors).

  • ave_Ql (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(\bar{Q_l}\) for each particle (filled with NaN for particles with no neighbors).

  • norm_Ql (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(Q_l\) for each particle normalized by the value over all particles (filled with NaN for particles with no neighbors).

  • ave_norm_Ql (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(\bar{Q_l}\) for each particle normalized by the value over all particles (filled with NaN for particles with no neighbors).

compute

Compute the order parameter.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeAve

Compute the order parameter over two nearest neighbor shells.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeAveNorm

Compute the order parameter over two nearest neighbor shells normalized by the average spherical harmonic value over all the particles.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeNorm

Compute the order parameter normalized by the average spherical harmonic value over all the particles.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

plot

Plot Ql.

Parameters
  • ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

  • mode (str) – Plotting mode. Must be one of “Ql”, “ave_Ql”, “norm_Ql”, or “ave_norm_Ql”. Plots the given attribute. If None, plot the most recently computed attribute. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

setBox

Reset the simulation box.

Parameters

box (freud.box.Box) – Simulation box.
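
A minimal usage sketch follows. The box, positions, and parameter values are illustrative only (not taken from the freud test suite); l = 6 with 12 nearest neighbors is a common choice for close-packed structures.

import numpy as np
import freud

# Hypothetical system: random points in a cubic box of side length 10
box = freud.box.Box.cube(10)
np.random.seed(0)
points = np.random.random_sample((100, 3)).astype(np.float32) * 10 - 5

ql = freud.order.LocalQlNear(box, 3.0, 6, 12)  # rmax, l, kn
ql.compute(points)
print(ql.Ql)           # per-particle Q_l values
ql.computeAve(points)
print(ql.ave_Ql)       # neighbor-averaged Q_l values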

Steinhardt \(W_l\) Order Parameter
class freud.order.LocalWl(box, rmax, l)

Compute the local Steinhardt [Steinhardt1983] rotationally invariant \(W_l\) order parameter for a set of points.

Implements the local rotationally invariant \(W_l\) order parameter described by Steinhardt. For a particle \(i\), we first average the spherical harmonics over the bonds between particle \(i\) and its neighbors \(j\) in a local region: \(\overline{Q}_{lm}(i) = \frac{1}{N_b} \displaystyle\sum_{j=1}^{N_b} Y_{lm}(\theta(\vec{r}_{ij}), \phi(\vec{r}_{ij}))\). The particles included in the sum are determined by the rmax argument to the constructor.

The \(W_l\) is then defined as a weighted average over the \(\overline{Q}_{lm}(i)\) values using Wigner 3j symbols (Clebsch-Gordan coefficients). The resulting combination is rotationally (i.e. frame) invariant.
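
For reference, this combination is conventionally written as follows (a sketch of the standard Steinhardt form; the exact normalization convention used internally may differ):

\[ W_l(i) = \displaystyle\sum_{m_1 + m_2 + m_3 = 0} \begin{pmatrix} l & l & l \\ m_1 & m_2 & m_3 \end{pmatrix} \overline{Q}_{lm_1}(i) \, \overline{Q}_{lm_2}(i) \, \overline{Q}_{lm_3}(i) \]

where the parenthesized array is the Wigner 3j symbol and the sum runs over all \(m_1, m_2, m_3 \in \{-l, \ldots, l\}\) satisfying \(m_1 + m_2 + m_3 = 0\).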

The computeAve() method provides access to a variant of this parameter that performs an average over the first and second shell combined [Lechner2008]. To compute this parameter, we perform a second averaging over the first neighbor shell of the particle to implicitly include information about the second neighbor shell. This averaging is performed by replacing the value \(\overline{Q}_{lm}(i)\) in the original definition by the average value of \(\overline{Q}_{lm}(k)\) over all the \(k\) neighbors of particle \(i\) as well as itself.

The computeNorm() and computeAveNorm() methods provide normalized versions of compute() and computeAve(), where the normalization is performed by dividing by the average \(Q_{lm}\) values over all particles.

Module author: Xiyu Du <xiyudu@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • rmax (float) – Cutoff radius for the local order parameter. Values near the first minimum of the RDF are recommended.

  • l (unsigned int) – Spherical harmonic quantum number l. Must be a positive integer.

  • rmin (float) – Lower bound for computing the local order parameter. Allows looking at, for instance, only the second shell, or some other arbitrary RDF region (Default value = 0).

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • num_particles (unsigned int) – Number of particles.

  • Wl (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(W_l\) for each particle (filled with NaN for particles with no neighbors).

  • ave_Wl (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(\bar{W}_l\) for each particle (filled with NaN for particles with no neighbors).

  • norm_Wl (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(W_l\) for each particle normalized by the value over all particles (filled with NaN for particles with no neighbors).

  • ave_norm_Wl (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(\bar{W}_l\) for each particle normalized by the value over all particles (filled with NaN for particles with no neighbors).

compute

Compute the order parameter.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeAve

Compute the order parameter over two nearest neighbor shells.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeAveNorm

Compute the order parameter over two nearest neighbor shells normalized by the average spherical harmonic value over all the particles.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeNorm

Compute the order parameter normalized by the average spherical harmonic value over all the particles.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

plot

Plot Wl.

Parameters
  • ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

  • mode (str) – Plotting mode. Must be one of “Wl”, “ave_Wl”, “norm_Wl”, or “ave_norm_Wl”. Plots the given attribute. If None, plot the most recently computed attribute. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

setBox

Reset the simulation box.

Parameters

box (freud.box.Box) – Simulation box.

class freud.order.LocalWlNear(box, rmax, l, kn)

A variant of the LocalWl class that performs its average over nearest neighbor particles as determined by an instance of freud.locality.NeighborList. The number of included neighbors is determined by the kn parameter to the constructor.

Module author: Xiyu Du <xiyudu@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • rmax (float) – Cutoff radius for the local order parameter. Values near the first minimum of the RDF are recommended.

  • l (unsigned int) – Spherical harmonic quantum number l. Must be a positive integer.

  • kn (unsigned int) – Number of nearest neighbors. Must be a positive integer.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • num_particles (unsigned int) – Number of particles.

  • Wl (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(W_l\) for each particle (filled with NaN for particles with no neighbors).

  • ave_Wl (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(\bar{W}_l\) for each particle (filled with NaN for particles with no neighbors).

  • norm_Wl (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(W_l\) for each particle normalized by the value over all particles (filled with NaN for particles with no neighbors).

  • ave_norm_Wl (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(\bar{W}_l\) for each particle normalized by the value over all particles (filled with NaN for particles with no neighbors).

compute

Compute the order parameter.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeAve

Compute the order parameter over two nearest neighbor shells.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeAveNorm

Compute the order parameter over two nearest neighbor shells normalized by the average spherical harmonic value over all the particles.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeNorm

Compute the order parameter normalized by the average spherical harmonic value over all the particles.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

plot

Plot Wl.

Parameters
  • ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

  • mode (str) – Plotting mode. Must be one of “Wl”, “ave_Wl”, “norm_Wl”, or “ave_norm_Wl”. Plots the given attribute. If None, plot the most recently computed attribute. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

setBox

Reset the simulation box.

Parameters

box (freud.box.Box) – Simulation box.

Solid-Liquid Order Parameter
class freud.order.SolLiq(box, rmax, Qthreshold, Sthreshold, l)

Uses dot products of \(Q_{lm}\) between particles for clustering.

Module author: Richmond Newman <newmanrs@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • rmax (float) – Cutoff radius for the local order parameter. Values near first minimum of the RDF are recommended.

  • Qthreshold (float) – Value of the dot product threshold when evaluating \(Q_{lm}^*(i) Q_{lm}(j)\) to determine if a neighbor pair is a solid-like bond. (For \(l=6\), 0.7 is generally good for FCC or BCC structures.)

  • Sthreshold (unsigned int) – Minimum required number of adjacent solid-like bonds for a particle to be considered solid-like for clustering. (For \(l=6\), 6-8 is generally good for FCC or BCC structures.)

  • l (unsigned int) – Choose spherical harmonic \(Q_l\). Must be positive and even.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • largest_cluster_size (unsigned int) – The largest cluster size. Must call a compute method first.

  • cluster_sizes (unsigned int) – The sizes of all clusters.

  • Ql_mi (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(Q_{lmi}\) for each particle.

  • clusters (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed set of solid-like cluster indices for each particle.

  • num_connections (\(\left(N_{particles}\right)\) numpy.ndarray) – The number of connections per particle.

  • Ql_dot_ij (\(\left(N_{particles}\right)\) numpy.ndarray) – Reference to the qldot_ij values.

  • num_particles (unsigned int) – Number of particles.

compute

Compute the solid-liquid order parameter.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeSolLiqNoNorm

Compute the solid-liquid order parameter without normalizing the dot product.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeSolLiqVariant

Compute a variant of the solid-liquid order parameter.

This variant method places a minimum threshold on the number of solid-like bonds a particle must have to be considered solid-like for clustering purposes.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).
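
A minimal usage sketch follows. The box, positions, and cutoff are illustrative; the threshold values follow the FCC/BCC guidance given above.

import numpy as np
import freud

box = freud.box.Box.cube(10)
np.random.seed(0)
points = np.random.random_sample((500, 3)).astype(np.float32) * 10 - 5

sl = freud.order.SolLiq(box, 2.0, 0.7, 6, 6)  # rmax, Qthreshold, Sthreshold, l
sl.compute(points)
print(sl.largest_cluster_size)
print(sl.clusters)  # per-particle solid-like cluster indices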

class freud.order.SolLiqNear(box, rmax, Qthreshold, Sthreshold, l, kn)

A variant of the SolLiq class that performs its average over nearest neighbor particles as determined by an instance of freud.locality.NeighborList. The number of included neighbors is determined by the kn parameter to the constructor.

Module author: Richmond Newman <newmanrs@umich.edu>

Parameters
  • box (freud.box.Box) – Simulation box.

  • rmax (float) – Cutoff radius for the local order parameter. Values near the first minimum of the RDF are recommended.

  • Qthreshold (float) – Value of the dot product threshold when evaluating \(Q_{lm}^*(i) Q_{lm}(j)\) to determine if a neighbor pair is a solid-like bond. (For \(l=6\), 0.7 is generally good for FCC or BCC structures.)

  • Sthreshold (unsigned int) – Minimum required number of adjacent solid-like bonds for a particle to be considered solid-like for clustering. (For \(l=6\), 6-8 is generally good for FCC or BCC structures.)

  • l (unsigned int) – Choose spherical harmonic \(Q_l\). Must be positive and even.

  • kn (unsigned int) – Number of nearest neighbors. Must be a positive number.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • largest_cluster_size (unsigned int) – The largest cluster size. Must call a compute method first.

  • cluster_sizes (unsigned int) – The sizes of all clusters.

  • Ql_mi (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed \(Q_{lmi}\) for each particle.

  • clusters (\(\left(N_{particles}\right)\) numpy.ndarray) – The last computed set of solid-like cluster indices for each particle.

  • num_connections (\(\left(N_{particles}\right)\) numpy.ndarray) – The number of connections per particle.

  • Ql_dot_ij (\(\left(N_{particles}\right)\) numpy.ndarray) – Reference to the qldot_ij values.

  • num_particles (unsigned int) – Number of particles.

compute

Compute the solid-liquid order parameter.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeSolLiqNoNorm

Compute the solid-liquid order parameter without normalizing the dot product.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

computeSolLiqVariant

Compute a variant of the solid-liquid order parameter that places a minimum threshold on the number of solid-like bonds a particle must have to be considered solid-like for clustering purposes.

Parameters
  • points ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate the order parameter.

  • nlist (freud.locality.NeighborList, optional) – Neighborlist to use to find bonds (Default value = None).

Rotational Autocorrelation
class freud.order.RotationalAutocorrelation(l)

Calculates a measure of total rotational autocorrelation based on hyperspherical harmonics as laid out in “Design rules for engineering colloidal plastic crystals of hard polyhedra - phase behavior and directional entropic forces” by Karas et al. (currently in preparation). The output is not a correlation function, but rather a scalar value that measures total system orientational correlation with an initial state. As such, the output can be treated as an order parameter measuring degrees of rotational (de)correlation. For analysis of a trajectory, the compute call needs to be done at each trajectory frame.

Module author: Andrew Karas <askaras@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

New in version 1.0.

Parameters

l (int) – Order of the hyperspherical harmonic. Must be a positive, even integer.

Variables
  • num_orientations (unsigned int) – The number of orientations used in the last computation.

  • azimuthal (int) – The azimuthal quantum number, which defines the order of the hyperspherical harmonic. Must be a positive, even integer.

  • ra_array ((\(N_{orientations}\)) numpy.ndarray) – The per-orientation array of rotational autocorrelation values calculated by the last call to compute.

  • autocorrelation (float) – The autocorrelation computed in the last call to compute.

compute

Calculates the rotational autocorrelation function for a single frame.

Parameters
  • ref_ors ((\(N_{orientations}\), 4) numpy.ndarray) – Reference orientations for the initial frame.

  • ors ((\(N_{orientations}\), 4) numpy.ndarray) – Orientations for the frame of interest.
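
A minimal per-frame usage sketch follows. The orientations here are placeholder identity quaternions; in practice they would come from a trajectory, with compute() called at every frame.

import numpy as np
import freud

# Placeholder (N, 4) unit quaternions for the reference frame
ref_ors = np.tile([1.0, 0.0, 0.0, 0.0], (100, 1)).astype(np.float32)
ors = ref_ors.copy()  # orientations for the frame of interest

ra = freud.order.RotationalAutocorrelation(2)  # l must be a positive, even integer
ra.compute(ref_ors, ors)
print(ra.autocorrelation)  # scalar correlation with the initial frame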

Parallel Module

Overview

freud.parallel.NumThreads

Context manager for managing the number of threads to use.

freud.parallel.getNumThreads

Get the number of threads for parallel computation.

freud.parallel.setNumThreads

Set the number of threads for parallel computation.

Details

The freud.parallel module controls the parallelization behavior of freud, determining how many threads the TBB-enabled parts of freud will use. freud uses all available threads for parallelization unless directed otherwise.

class freud.parallel.NumThreads

Context manager for managing the number of threads to use.

Module author: Joshua Anderson <joaander@umich.edu>

Parameters

N (int, optional) – Number of threads to use in this context. Defaults to None, which will use all available threads.

freud.parallel.getNumThreads

Get the number of threads for parallel computation.

Module author: Bradley Dice <bdice@bradleydice.com>

Returns

Number of threads.

Return type

int

freud.parallel.setNumThreads

Set the number of threads for parallel computation.

Module author: Joshua Anderson <joaander@umich.edu>

Parameters

nthreads (int, optional) – Number of threads to use. If None (default), use all threads available.
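
A short usage sketch combining the three entry points described above:

import freud

freud.parallel.setNumThreads(4)        # use four threads for subsequent computations
print(freud.parallel.getNumThreads())  # print the current thread count

# Temporarily restrict freud to a single thread for one block of analysis
with freud.parallel.NumThreads(1):
    pass  # run freud computations here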

PMFT Module

Overview

freud.pmft.PMFTR12

Computes the PMFT [vanAndersKlotsa2014] [vanAndersAhmed2014] in a 2D system described by \(r\), \(\theta_1\), \(\theta_2\).

freud.pmft.PMFTXYT

Computes the PMFT [vanAndersKlotsa2014] [vanAndersAhmed2014] for systems described by coordinates \(x\), \(y\), \(\theta\) listed in the X, Y, and T arrays.

freud.pmft.PMFTXY2D

Computes the PMFT [vanAndersKlotsa2014] [vanAndersAhmed2014] in coordinates \(x\), \(y\) listed in the X and Y arrays.

freud.pmft.PMFTXYZ

Computes the PMFT [vanAndersKlotsa2014] [vanAndersAhmed2014] in coordinates \(x\), \(y\), \(z\), listed in the X, Y, and Z arrays.

Details

The freud.pmft module allows for the calculation of the Potential of Mean Force and Torque (PMFT) [vanAndersKlotsa2014] [vanAndersAhmed2014] in a number of different coordinate systems. The shape of the arrays computed by this module depends on the coordinate system used, with space discretized into a set of bins created by the PMFT object’s constructor. Each reference point’s neighboring points are assigned to bins, determined by the relative positions and/or orientations of the particles. Next, the positional correlation function (PCF) is computed by normalizing the binned histogram: the number of accumulated frames, the bin sizes (the Jacobian), and the reference point number density are divided out. The PMFT is then defined as the negative logarithm of the PCF. For further descriptions of the numerical methods used to compute the PMFT, refer to the supplementary information of [vanAndersKlotsa2014].

Note

The coordinate system in which the calculation is performed is not the same as the coordinate system in which particle positions and orientations should be supplied. Only certain coordinate systems are available for certain particle positions and orientations:

  • 2D particle coordinates (position: [\(x\), \(y\), \(0\)], orientation: \(\theta\)):

    • \(r\), \(\theta_1\), \(\theta_2\).

    • \(x\), \(y\).

    • \(x\), \(y\), \(\theta\).

  • 3D particle coordinates:

    • \(x\), \(y\), \(z\).

Note

For any bins where the histogram is zero (i.e. no observations were made with that relative position/orientation of particles), the PCF will be zero and the PMFT will return nan.

PMFT \(\left(r, \theta_1, \theta_2\right)\)
class freud.pmft.PMFTR12(r_max, n_r, n_t1, n_t2)

Computes the PMFT [vanAndersKlotsa2014] [vanAndersAhmed2014] in a 2D system described by \(r\), \(\theta_1\), \(\theta_2\).

Note

2D: freud.pmft.PMFTR12 is only defined for 2D systems. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Module author: Eric Harper <harperic@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • r_max (float) – Maximum distance at which to compute the PMFT.

  • n_r (unsigned int) – Number of bins in \(r\).

  • n_t1 (unsigned int) – Number of bins in \(\theta_1\).

  • n_t2 (unsigned int) – Number of bins in \(\theta_2\).

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • bin_counts (\(\left(N_{r}, N_{\theta2}, N_{\theta1}\right)\)) – Bin counts.

  • PCF (\(\left(N_{r}, N_{\theta2}, N_{\theta1}\right)\)) – The positional correlation function.

  • PMFT (\(\left(N_{r}, N_{\theta2}, N_{\theta1}\right)\)) – The potential of mean force and torque.

  • r_cut (float) – The cutoff used in the cell list.

  • R (\(\left(N_{r}\right)\) numpy.ndarray) – The array of \(r\)-values for the PCF histogram.

  • T1 (\(\left(N_{\theta1}\right)\) numpy.ndarray) – The array of \(\theta_1\)-values for the PCF histogram.

  • T2 (\(\left(N_{\theta2}\right)\) numpy.ndarray) – The array of \(\theta_2\)-values for the PCF histogram.

  • inverse_jacobian (\(\left(N_{r}, N_{\theta2}, N_{\theta1}\right)\)) – The inverse Jacobian used in the PMFT.

  • n_bins_R (unsigned int) – The number of bins in the \(r\)-dimension of the histogram.

  • n_bins_T1 (unsigned int) – The number of bins in the \(\theta_1\)-dimension of the histogram.

  • n_bins_T2 (unsigned int) – The number of bins in the \(\theta_2\)-dimension of the histogram.

accumulate

Calculates the positional correlation function and adds to the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in computation.

  • ref_orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray) – Reference orientations as angles used in computation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used in computation. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray, optional) – Orientations as angles used in computation. Uses ref_orientations if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList used to find bonds (Default value = None).

compute

Calculates the positional correlation function for the given points. Will overwrite the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in computation.

  • ref_orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray) – Reference orientations as angles used in computation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used in computation. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray, optional) – Orientations as angles used in computation. Uses ref_orientations if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList used to find bonds (Default value = None).

reset

Resets the values of the PCF histograms in memory.

PMFT \(\left(x, y\right)\)
class freud.pmft.PMFTXY2D(x_max, y_max, n_x, n_y)

Computes the PMFT [vanAndersKlotsa2014] [vanAndersAhmed2014] in coordinates \(x\), \(y\) listed in the X and Y arrays.

The values of \(x\) and \(y\) at which to compute the PCF are controlled by x_max, y_max, n_x, and n_y parameters to the constructor. The x_max and y_max parameters determine the minimum/maximum distance at which to compute the PCF and n_x and n_y are the number of bins in \(x\) and \(y\).

Note

2D: freud.pmft.PMFTXY2D is only defined for 2D systems. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Module author: Eric Harper <harperic@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • x_max (float) – Maximum \(x\) distance at which to compute the PMFT.

  • y_max (float) – Maximum \(y\) distance at which to compute the PMFT.

  • n_x (unsigned int) – Number of bins in \(x\).

  • n_y (unsigned int) – Number of bins in \(y\).

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • bin_counts (\(\left(N_{y}, N_{x}\right)\) numpy.ndarray) – Bin counts.

  • PCF (\(\left(N_{y}, N_{x}\right)\) numpy.ndarray) – The positional correlation function.

  • PMFT (\(\left(N_{y}, N_{x}\right)\) numpy.ndarray) – The potential of mean force and torque.

  • r_cut (float) – The cutoff used in the cell list.

  • X (\(\left(N_{x}\right)\) numpy.ndarray) – The array of \(x\)-values for the PCF histogram.

  • Y (\(\left(N_{y}\right)\) numpy.ndarray) – The array of \(y\)-values for the PCF histogram.

  • jacobian (float) – The Jacobian used in the PMFT.

  • n_bins_X (unsigned int) – The number of bins in the \(x\)-dimension of the histogram.

  • n_bins_Y (unsigned int) – The number of bins in the \(y\)-dimension of the histogram.

accumulate

Calculates the positional correlation function and adds to the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in computation.

  • ref_orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray) – Reference orientations as angles used in computation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used in computation. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray, optional) – Orientations as angles used in computation. Uses ref_orientations if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList used to find bonds (Default value = None).

compute

Calculates the positional correlation function for the given points. Will overwrite the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in computation.

  • ref_orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray) – Reference orientations as angles used in computation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used in computation. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray, optional) – Orientations as angles used in computation. Uses ref_orientations if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList used to find bonds (Default value = None).

plot

Plot PMFTXY2D.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

reset

Resets the values of the PCF histograms in memory.
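
A minimal usage sketch follows. The box, positions, and orientations are illustrative; for a real trajectory, accumulate() would be called once per frame.

import numpy as np
import freud

# Hypothetical 2D system: positions with z = 0 and orientations given as angles
box = freud.box.Box.square(10)
np.random.seed(0)
points = np.zeros((100, 3), dtype=np.float32)
points[:, :2] = np.random.random_sample((100, 2)) * 10 - 5
orientations = np.random.random_sample(100).astype(np.float32) * 2 * np.pi

pmft = freud.pmft.PMFTXY2D(3.0, 3.0, 100, 100)  # x_max, y_max, n_x, n_y
pmft.accumulate(box, points, orientations)      # add this frame to the histogram
print(pmft.PMFT)  # (n_y, n_x) array; bins with no observations are nan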

PMFT \(\left(x, y, \theta\right)\)
class freud.pmft.PMFTXYT(x_max, y_max, n_x, n_y, n_t)

Computes the PMFT [vanAndersKlotsa2014] [vanAndersAhmed2014] for systems described by coordinates \(x\), \(y\), \(\theta\) listed in the X, Y, and T arrays.

The values of \(x, y, \theta\) at which to compute the PCF are controlled by the x_max, y_max, and n_x, n_y, n_t parameters to the constructor. The x_max and y_max parameters determine the minimum/maximum \(x, y\) values (\(\min\left(\theta\right) = 0\) and \(\max\left(\theta\right) = 2\pi\)) at which to compute the PCF, and n_x, n_y, n_t are the number of bins in \(x, y, \theta\).

Note

2D: freud.pmft.PMFTXYT is only defined for 2D systems. The points must be passed in as [x, y, 0]. Failing to set z=0 will lead to undefined behavior.

Module author: Eric Harper <harperic@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • x_max (float) – Maximum \(x\) distance at which to compute the PMFT.

  • y_max (float) – Maximum \(y\) distance at which to compute the PMFT.

  • n_x (unsigned int) – Number of bins in \(x\).

  • n_y (unsigned int) – Number of bins in \(y\).

  • n_t (unsigned int) – Number of bins in \(\theta\).

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • bin_counts (\(\left(N_{\theta}, N_{y}, N_{x}\right)\) numpy.ndarray) – Bin counts.

  • PCF (\(\left(N_{\theta}, N_{y}, N_{x}\right)\) numpy.ndarray) – The positional correlation function.

  • PMFT (\(\left(N_{\theta}, N_{y}, N_{x}\right)\) numpy.ndarray) – The potential of mean force and torque.

  • r_cut (float) – The cutoff used in the cell list.

  • X (\(\left(N_{x}\right)\) numpy.ndarray) – The array of \(x\)-values for the PCF histogram.

  • Y (\(\left(N_{y}\right)\) numpy.ndarray) – The array of \(y\)-values for the PCF histogram.

  • T (\(\left(N_{\theta}\right)\) numpy.ndarray) – The array of \(\theta\)-values for the PCF histogram.

  • jacobian (float) – The Jacobian used in the PMFT.

  • n_bins_X (unsigned int) – The number of bins in the \(x\)-dimension of the histogram.

  • n_bins_Y (unsigned int) – The number of bins in the \(y\)-dimension of the histogram.

  • n_bins_T (unsigned int) – The number of bins in the \(\theta\)-dimension of the histogram.

accumulate

Calculates the positional correlation function and adds to the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in computation.

  • ref_orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray) – Reference orientations as angles used in computation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used in computation. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray, optional) – Orientations as angles used in computation. Uses ref_orientations if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList used to find bonds (Default value = None).

compute

Calculates the positional correlation function for the given points. Will overwrite the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in computation.

  • ref_orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray) – Reference orientations as angles used in computation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used in computation. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 1) or (\(N_{particles}\),) numpy.ndarray, optional) – Orientations as angles used in computation. Uses ref_orientations if not provided or None.

  • nlist (freud.locality.NeighborList, optional) – NeighborList used to find bonds (Default value = None).

reset

Resets the values of the PCF histograms in memory.

PMFT \(\left(x, y, z\right)\)
class freud.pmft.PMFTXYZ(x_max, y_max, z_max, n_x, n_y, n_z)

Computes the PMFT [vanAndersKlotsa2014] [vanAndersAhmed2014] in coordinates \(x\), \(y\), \(z\), listed in the X, Y, and Z arrays.

The values of \(x, y, z\) at which to compute the PCF are controlled by the x_max, y_max, z_max, n_x, n_y, and n_z parameters to the constructor. The x_max, y_max, and z_max parameters determine the minimum/maximum distance at which to compute the PCF, and n_x, n_y, and n_z are the number of bins in \(x, y, z\).

Note

3D: freud.pmft.PMFTXYZ is only defined for 3D systems. The points must be passed in as [x, y, z].

Module author: Eric Harper <harperic@umich.edu>

Module author: Vyas Ramasubramani <vramasub@umich.edu>

Parameters
  • x_max (float) – Maximum \(x\) distance at which to compute the PMFT.

  • y_max (float) – Maximum \(y\) distance at which to compute the PMFT.

  • z_max (float) – Maximum \(z\) distance at which to compute the PMFT.

  • n_x (unsigned int) – Number of bins in \(x\).

  • n_y (unsigned int) – Number of bins in \(y\).

  • n_z (unsigned int) – Number of bins in \(z\).

  • shiftvec (list) – Vector pointing from [0, 0, 0] to the center of the PMFT.

Variables
  • box (freud.box.Box) – Box used in the calculation.

  • bin_counts (\(\left(N_{z}, N_{y}, N_{x}\right)\) numpy.ndarray) – Bin counts.

  • PCF (\(\left(N_{z}, N_{y}, N_{x}\right)\) numpy.ndarray) – The positional correlation function.

  • PMFT (\(\left(N_{z}, N_{y}, N_{x}\right)\) numpy.ndarray) – The potential of mean force and torque.

  • r_cut (float) – The cutoff used in the cell list.

  • X (\(\left(N_{x}\right)\) numpy.ndarray) – The array of \(x\)-values for the PCF histogram.

  • Y (\(\left(N_{y}\right)\) numpy.ndarray) – The array of \(y\)-values for the PCF histogram.

  • Z (\(\left(N_{z}\right)\) numpy.ndarray) – The array of \(z\)-values for the PCF histogram.

  • jacobian (float) – The Jacobian used in the PMFT.

  • n_bins_X (unsigned int) – The number of bins in the \(x\)-dimension of the histogram.

  • n_bins_Y (unsigned int) – The number of bins in the \(y\)-dimension of the histogram.

  • n_bins_Z (unsigned int) – The number of bins in the \(z\)-dimension of the histogram.

accumulate

Calculates the positional correlation function and adds to the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in computation.

  • ref_orientations ((\(N_{particles}\), 4) numpy.ndarray) – Reference orientations as quaternions used in computation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used in computation. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 4) numpy.ndarray, optional) – Orientations as quaternions used in computation. Uses ref_orientations if not provided or None.

  • face_orientations ((\(N_{particles}\), 4) numpy.ndarray, optional) – Orientations of particle faces to account for particle symmetry. If not supplied by user, unit quaternions will be supplied. If a 2D array of shape (\(N_f\), 4) or a 3D array of shape (1, \(N_f\), 4) is supplied, the supplied quaternions will be broadcast for all particles. (Default value = None).

  • nlist (freud.locality.NeighborList, optional) – NeighborList used to find bonds (Default value = None).

compute

Calculates the positional correlation function for the given points. Will overwrite the current histogram.

Parameters
  • box (freud.box.Box) – Simulation box.

  • ref_points ((\(N_{particles}\), 3) numpy.ndarray) – Reference points used in computation.

  • ref_orientations ((\(N_{particles}\), 4) numpy.ndarray) – Reference orientations as quaternions used in computation.

  • points ((\(N_{particles}\), 3) numpy.ndarray, optional) – Points used in computation. Uses ref_points if not provided or None.

  • orientations ((\(N_{particles}\), 4) numpy.ndarray, optional) – Orientations as quaternions used in computation. Uses ref_orientations if not provided or None.

  • face_orientations ((\(N_{particles}\), 4) numpy.ndarray, optional) – Orientations of particle faces to account for particle symmetry. If not supplied by user, unit quaternions will be supplied. If a 2D array of shape (\(N_f\), 4) or a 3D array of shape (1, \(N_f\), 4) is supplied, the supplied quaternions will be broadcast for all particles. (Default value = None).

  • nlist (freud.locality.NeighborList, optional) – NeighborList used to find bonds (Default value = None).

reset

Resets the values of the PCF histograms in memory.
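
A minimal usage sketch for the 3D case follows. The box and positions are illustrative, and identity quaternions stand in for real per-particle orientations.

import numpy as np
import freud

box = freud.box.Box.cube(10)
np.random.seed(0)
points = np.random.random_sample((200, 3)).astype(np.float32) * 10 - 5
# Identity quaternions as placeholders for real orientations
orientations = np.tile([1.0, 0.0, 0.0, 0.0], (200, 1)).astype(np.float32)

pmft = freud.pmft.PMFTXYZ(3.0, 3.0, 3.0, 50, 50, 50)  # x_max, y_max, z_max, n_x, n_y, n_z
pmft.compute(box, points, orientations)
print(pmft.PMFT.shape)  # (n_z, n_y, n_x)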

Voronoi Module

Overview

freud.voronoi.Voronoi

Compute the Voronoi tessellation of a 2D or 3D system using qhull.

Details

The freud.voronoi module contains tools to characterize Voronoi cells of a system.

class freud.voronoi.Voronoi(box, buff)

Compute the Voronoi tessellation of a 2D or 3D system using qhull. This uses scipy.spatial.Voronoi, accounting for periodic boundary conditions.

Module author: Benjamin Schultz <baschult@umich.edu>

Module author: Yina Geng <yinageng@umich.edu>

Module author: Mayank Agrawal <amayank@umich.edu>

Module author: Bradley Dice <bdice@bradleydice.com>

Since qhull does not support periodic boundary conditions natively, we expand the box to include a portion of the particles’ periodic images. The buffer width is given by the parameter buff. The computation of Voronoi tessellations and neighbors is only guaranteed to be correct if buff >= L/2, where L is the longest side of the simulation box. For dense systems with particles filling the entire simulation volume, a smaller value of buff is acceptable. If the buffer width is too small, then some polytopes may not be closed (they may have a boundary at infinity), and those polytopes’ vertices are excluded from the list. If the length of either the computed polytopes or volumes list differs from the number of positions passed to the freud.voronoi.Voronoi.compute() method, try recomputing with a larger buffer width.

Parameters
  • box (freud.box.Box) – Simulation box.

  • buff (float) – Buffer width.

Variables
  • buffer (float) – Buffer width.

  • nlist (NeighborList) – Returns a weighted neighbor list. In 2D systems, the bond weight is the “ridge length” of the Voronoi boundary line between the neighboring particles. In 3D systems, the bond weight is the “ridge area” of the Voronoi boundary polygon between the neighboring particles.

  • polytopes (list[numpy.ndarray]) – List of arrays, each containing Voronoi polytope vertices.

  • volumes ((\(\left(N_{cells} \right)\)) numpy.ndarray) – Returns an array of volumes (areas in 2D) corresponding to Voronoi cells.

compute

Compute Voronoi diagram.

Parameters
  • positions ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate Voronoi diagram for.

  • box (freud.box.Box) – Simulation box (Default value = None).

  • buff (float) – Buffer distance within which to look for images (Default value = None).

computeNeighbors

Compute the neighbors of each particle based on the Voronoi tessellation. One can include neighbors from multiple Voronoi shells by specifying numShells in getNeighbors(). An example of computing neighbors from the first two Voronoi shells for a 2D mesh is shown below.

Retrieve the results with getNeighbors().

Example:

from freud import box, voronoi
import numpy as np
vor = voronoi.Voronoi(box.Box(5, 5, is2D=True))
pos = np.array([[0, 0, 0], [0, 1, 0], [0, 2, 0],
                [1, 0, 0], [1, 1, 0], [1, 2, 0],
                [2, 0, 0], [2, 1, 0], [2, 2, 0]], dtype=np.float32)
first_shell = vor.computeNeighbors(pos).getNeighbors(1)
second_shell = vor.computeNeighbors(pos).getNeighbors(2)
print('First shell:', first_shell)
print('Second shell:', second_shell)

Note

Input positions must be a 3D array. For 2D, set the z value to 0.

Parameters
  • positions ((\(N_{particles}\), 3) numpy.ndarray) – Points to calculate Voronoi diagram for.

  • box (freud.box.Box) – Simulation box (Default value = None).

  • buff (float) – Buffer distance within which to look for images (Default value = None).

  • exclude_ii (bool, optional) – True if pairs of points with identical indices should be excluded (Default value = True).

computeVolumes

Computes volumes (areas in 2D) of Voronoi cells.

New in version 0.8.

Must call freud.voronoi.Voronoi.compute() before this method. Retrieve the results with the volumes attribute.
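
A minimal sketch of this workflow follows. The system parameters are illustrative; the buffer follows the buff >= L/2 guidance above, and the box supplied to the constructor is assumed to be reused when compute() is called without one (as the Default value = None documentation above suggests).

import numpy as np
import freud

box = freud.box.Box.cube(10)
np.random.seed(0)
positions = np.random.random_sample((50, 3)).astype(np.float32) * 10 - 5

vor = freud.voronoi.Voronoi(box, 5.0)  # buff = L/2 for guaranteed correctness
vor.compute(positions)
vor.computeVolumes()
print(vor.volumes)  # per-cell volumes (areas for 2D systems)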

getNeighbors

Get well-sorted neighbors from cumulative Voronoi shells for each particle by specifying numShells.

Must call computeNeighbors() before this method.

Parameters

numShells (int) – Number of neighbor shells.

plot

Plot Voronoi diagram.

Parameters

ax (matplotlib.axes.Axes) – Axis to plot on. If None, make a new figure and axis. (Default value = None)

Returns

Axis with the plot.

Return type

(matplotlib.axes.Axes)

Development Guide

Contributions to freud are highly encouraged. The pages below offer information about freud’s design goals and how to contribute new modules.

Design Principles

Vision

The freud library is designed to be a powerful and flexible library for the analysis of simulation output. To support a variety of analysis routines, freud places few restrictions on its components. The primary requirement for an analysis routine in freud is that it should be substantially computationally intensive so as to require coding up in C++: all freud code should be composed of fast C++ routines operating on systems of particles in periodic boxes. To remain easy-to-use, all C++ modules should be wrapped in Python code so they can be easily accessed from Python scripts or through a Python interpreter.

In order to achieve this goal, freud takes the following viewpoints:

  • In order to remain as agnostic to inputs as possible, freud makes no attempt to interface directly with simulation software. Instead, freud works directly with NumPy (http://www.numpy.org/) arrays to retain maximum flexibility.

  • For ease of maintenance, freud uses Git for version control; GitHub for code hosting and issue tracking; and the PEP 8 standard for code, stressing explicitly written code which is easy to read.

  • To ensure correctness, freud employs unit testing using the Python unittest framework. In addition, freud utilizes CircleCI for continuous integration to ensure that all of its code works correctly and that any changes or new features do not break existing functionality.

Language choices

The freud library is written in two languages: Python and C++. C++ allows for powerful, fast code execution while Python allows for easy, flexible use. Intel Threading Building Blocks parallelism provides further power to C++ code. The C++ code is wrapped with Cython, allowing for user interaction in Python. NumPy provides the basic data structures in freud, which are commonly used in other Python plotting libraries and packages.

Unit Tests

All modules should include a set of unit tests which test the correct behavior of the module. These tests should be simple and short, testing a single function each, and completing as quickly as possible (ideally < 10 sec, but times up to a minute are acceptable if justified).
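
A minimal sketch of the expected structure is shown below. The attribute names follow the freud.density.RDF documentation; the system and parameter values are illustrative only.

import unittest

import numpy as np

import freud


class TestRDF(unittest.TestCase):
    def test_rdf_is_nonnegative(self):
        # Build a small random system; values are illustrative only.
        box = freud.box.Box.cube(10)
        np.random.seed(0)
        points = np.random.random_sample((100, 3)).astype(np.float32) * 10 - 5
        rdf = freud.density.RDF(4.0, 0.1)
        rdf.compute(box, points)
        # A radial distribution function can never be negative.
        self.assertTrue(np.all(rdf.RDF >= 0))


if __name__ == '__main__':
    unittest.main()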

Benchmarks

Modules can be benchmarked in the following way. The following code is an example benchmark for the freud.density.RDF module.

import numpy as np
import freud
from benchmark import Benchmark
from benchmarker import run_benchmarks


class BenchmarkDensityRDF(Benchmark):
    def __init__(self, rmax, dr, rmin):
        self.rmax = rmax
        self.dr = dr
        self.rmin = rmin

    def bench_setup(self, N):
        self.box_size = self.rmax*3.1
        np.random.seed(0)
        self.points = np.random.random_sample((N, 3)).astype(np.float32) \
            * self.box_size - self.box_size/2
        self.rdf = freud.density.RDF(self.rmax, self.dr, rmin=self.rmin)
        self.box = freud.box.Box.cube(self.box_size)

    def bench_run(self, N):
        self.rdf.accumulate(self.box, self.points)
        self.rdf.compute(self.box, self.points)


def run():
    Ns = [1000, 10000]
    rmax = 10.0
    dr = 1.0
    rmin = 0
    number = 100
    name = 'freud.density.RDF'
    classobj = BenchmarkDensityRDF

    return run_benchmarks(name, Ns, number, classobj,
                          rmax=rmax, dr=dr, rmin=rmin)


if __name__ == '__main__':
    run()

Place this code in a file benchmark_density_RDF.py in the benchmarks directory, where more examples can be found. The runtime of BenchmarkDensityRDF.bench_run will be timed number times for each of the input sizes in Ns. Its runtime with respect to the number of threads will also be measured. Benchmarks are run as part of continuous integration, with performance comparisons between the current commit and the master branch.

Make Execution Explicit

While it is tempting to make your code do things “automatically”, such as having a calculate method find all _calc methods in a class, call them, and add their return values to a dictionary that is returned to the user, it is preferred in freud to execute code explicitly. This helps avoid issues with debugging and undocumented behavior:

# this is bad
class SomeFreudClass(object):
    def __init__(self, **kwargs):
        for key in kwargs.keys:
            setattr(self, key, kwargs[key])

# this is good
class SomeOtherFreudClass(object):
    def __init__(self, x=None, y=None):
        self.x = x
        self.y = y
Code Duplication

When possible, code should not be duplicated. However, being explicit is more important. In freud this translates to many of the inner loops of functions being very similar:

// somewhere deep in function_a
for (int i = 0; i < n; i++)
    {
    vec3<float> pos_i = position[i];
    for (int j = 0; j < n; j++)
        {
        vec3<float> pos_j = position[j];
        // more calls here
        }
    }

// somewhere deep in function_b
for (int i = 0; i < n; i++)
    {
    vec3<float> pos_i = position[i];
    for (int j = 0; j < n; j++)
        {
        vec3<float> pos_j = position[j];
        // more calls here
        }
    }

While it might be possible to create a base C++ class that all such classes inherit from, which would run through the positions, call a calculation, and return, this would be rather complicated. Additionally, any changes to the internals of that shared code would affect every class that inherits from it and may result in performance penalties, difficulty in debugging, etc. As before, being explicit is better.

However, if you have a class which has a number of methods, each of which requires the calling of a function, this function should be written as its own method (instead of being copy-pasted into each method) as is typical in object-oriented programming.

Python vs. Cython vs. C++

The freud library is meant to leverage the power of C++ code imbued with parallel processing power from TBB with the ease of writing Python code. The bulk of your calculations should take place in C++, as shown in the snippet below:

# this is bad
def badHeavyLiftingInPython(positions):
    # check that positions are fine
    for i, pos_i in enumerate(positions):
        for j, pos_j in enumerate(positions):
            if i != j:
                r_ij = pos_j - pos_i
                # ...
                computed_array[i] += some_val
    return computed_array

# this is good
def goodHeavyLiftingInCPlusPlus(positions):
    # check that positions are fine
    cplusplus_heavy_function(computed_array, positions, len(positions))
    return computed_array

In the C++ code, implement the heavy lifting function called above from Python:

void cplusplus_heavy_function(float* computed_array,
                              float* positions,
                              int n)
    {
    for (int i = 0; i < n; i++)
        {
        for (int j = 0; j < n; j++)
            {
            if (i != j)
                {
                r_ij = pos_j - pos_i;
                // ...
                computed_array[i] += some_val;
                }
            }
        }
    }

Some functions may be necessary to write at the Python level due to a Python library not having an equivalent C++ library, complexity of coding, etc. In this case, the code should be written in Cython and a reasonable attempt to optimize the code should be made.

Source Code Conventions

The guidelines below should be followed for any new code added to freud.


Naming Conventions

The following conventions should apply to Python, Cython, and C++ code.

  • Variable names use lower_case_with_underscores

  • Function and method names use lowerCaseWithNoUnderscores

  • Class names use CapWords


Indentation
  • Spaces, not tabs, must be used for indentation

  • 4 spaces are required per level of indentation and continuation lines


Python

Code in freud should follow PEP 8, as well as the following guidelines. Anything listed here takes precedence over PEP 8, but try to deviate as little as possible from PEP 8. When in doubt, follow these guidelines over PEP 8.

During continuous integration (CI), all Python and Cython code in freud is tested with flake8 to ensure PEP 8 compliance. It is strongly recommended to set up a pre-commit hook to ensure code is compliant before pushing to the repository:

flake8 --install-hook git
git config --bool flake8.strict true
Source
  • All code should be contained in Cython files

  • Python .py files are reserved for module-level docstrings and minor miscellaneous tasks, e.g., backwards compatibility.

  • Semicolons should not be used to mark the end of lines in Python.

Documentation Comments
  • Documentation is generated using sphinx.

  • The documentation should be written according to the Google Python Style Guide.

  • A few specific notes:

    • The shapes of NumPy arrays should be documented as part of the type in the following manner:

      points ((:math:`N_{points}`, 3) :class:`numpy.ndarray`):
      
    • Constructors should be documented at the class level.

    • Class attributes (including properties) should be documented as class attributes within the class-level docstring.

    • Optional arguments should be documented as such within the type after the actual type, and the default value should be included within the description:

      box (:class:`freud.box.Box`, optional): Simulation box (Default value = None).
      
    • Properties that are settable should be documented the same way as optional arguments: Lx (float, settable): Length in x.

  • All docstrings should be contained within the Cython files.

  • If you copy an existing file as a template, make sure to modify the comments to reflect the new file.

  • Docstrings should demonstrate how to use the code with an example. Liberal addition of examples is encouraged.


C++

C++ code should follow the result of running clang-format-6.0 with the style specified in the file .clang-format. Please refer to Clang Format 6 for details.

When in doubt, run clang-format -style=file FILE_WITH_YOUR_CODE in the top directory of the freud repository. If installing clang-format is not a viable option, the check-style step of continuous integration (CI) contains the information on the correctness of the style.

Source
  • TBB sections should use lambdas, not functors (see this tutorial).

void someFunction(float some_var, float other_var)
{
    // code before parallel section
    parallel_for(blocked_range<size_t>(0, n), [=](const blocked_range<size_t>& r) {
        // do stuff
    });
}
Documentation Comments
  • Add explanatory comments throughout your code.

How to Add New Code

This document details the process of adding new code into freud.

Does my code belong in freud?

The freud library is not meant to simply wrap or augment external Python libraries. A good rule of thumb is: if the code you plan to write does not require C++, it does not belong in freud. There are, of course, exceptions.

Create a new branch

You should branch your code from master into a new branch. Do not add new code directly into the master branch.

Add a New Module

If the code you are adding is in a new module, not an existing module, you must do the following:

  • Create cpp/moduleName folder

  • Edit freud/__init__.py

    • Add from . import moduleName so that your module is imported by default.

  • Edit freud/_freud.pyx

    • Add include "moduleName.pxi". This must be done to have freud include your Python-level code.

  • Create freud/moduleName.pxi file

    • This will house the python-level code.

    • If you have a .pxd file exposing C++ classes, make sure to import that:

cimport freud._moduleName as moduleName
  • Create freud/moduleName.py file

    • Make sure there is an import for each C++ class in your module:

from ._freud import MyC++Class
  • Create freud/_moduleName.pxd

    • This file will expose the C++ classes in your module to python.

  • Edit setup.py

    • Add cpp/moduleName to the includes list.

    • If there are any helper cc files that will not have a corresponding Cython class, add those files to the sources list inside the extensions list.

  • Add line to doc/source/modules.rst

    • Make sure your new module is referenced in the documentation.

  • Create doc/source/moduleName.rst

Add to an Existing Module

To add a new class to an existing module, do the following:

  • Create cpp/moduleName/SubModule.h and cpp/moduleName/SubModule.cc

    • New classes should be grouped into paired .h, .cc files. There may be a few instances where new classes could be added to an existing .h, .cc pairing.

  • Edit freud/moduleName.py file

    • Add a line for each C++ class in your module:

from ._freud import MyC++Class
  • Expose C++ class in freud/_moduleName.pxd

  • Create Python interface in freud/moduleName.pxi

You must include sphinx-style documentation and unit tests.

  • Add extra documentation to doc/source/moduleName.rst

  • Add unit tests to freud/tests

References and Citations

Matplotlib

Hunter, J. D. (2007). Matplotlib: A 2D Graphics Environment. Computing in Science & Engineering, 9 (3), 90-95. https://doi.org/10.1109/MCSE.2007.55

Bokeh

Bokeh Development Team (2018). Bokeh: Python library for interactive visualization. https://bokeh.pydata.org

HajiAkbari2015

Haji-Akbari, A., & Glotzer, S. C. (2015). Strong orientational coordinates and orientational order parameters for symmetric objects. Journal of Physics A: Mathematical and Theoretical, 48. https://doi.org/10.1088/1751-8113/48/48/485201

vanAndersKlotsa2014

van Anders, G., Klotsa, D., Ahmed, N. K., Engel, M., & Glotzer, S. C. (2014). Understanding shape entropy through local dense packing. Proceedings of the National Academy of Sciences, 111 (45), E4812–E4821. https://doi.org/10.1073/pnas.1418159111

vanAndersAhmed2014

van Anders, G., Ahmed, N. K., Smith, R., Engel, M., & Glotzer, S. C. (2014). Entropically patchy particles: Engineering valence through shape entropy. ACS Nano, 8 (1), 931–940. https://doi.org/10.1021/nn4057353

Lechner2008

Lechner, W., & Dellago, C. (2008). Accurate determination of crystal structures based on averaged local bond order parameters. Journal of Chemical Physics, 129 (11). https://doi.org/10.1063/1.2977970

Steinhardt1983

Steinhardt, P.J., Nelson, D.R., & Ronchetti, M. (1983). Bond-orientational order in liquids and glasses. Phys. Rev. B 28 (784). https://doi.org/10.1103/PhysRevB.28.784

Calandrini2011

Calandrini, V., Pellegrini, E., Calligari, P., Hinsen, K., & Kneller, G. R. (2011). nMoldyn-Interfacing spectroscopic experiments, molecular dynamics simulations and models for time correlation functions. École thématique de la Société Française de la Neutronique, 12, 201-232. https://doi.org/10.1051/sfn/201112010

License

BSD 3-Clause License for freud

Copyright (c) 2010-2019 The Regents of the University of Michigan
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
   may be used to endorse or promote products derived from this software without
   specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Credits

freud Developers

The following people contributed to the development of freud.

Eric Harper, University of Michigan - Former lead developer

  • TBB parallelism.

  • PMFT module.

  • NearestNeighbors.

  • RDF.

  • Bonding module (since removed).

  • Cubatic order parameter.

  • Hexatic order parameter.

  • Pairing2D (since removed).

Joshua A. Anderson, University of Michigan - Creator

  • Initial design and implementation.

  • IteratorLinkCell.

  • LinkCell.

  • Various density modules.

  • freud.parallel.

  • Indexing modules.

  • cluster.pxi.

Matthew Spellings - Former lead developer

  • Added generic neighbor list.

  • Enabled neighbor list usage across freud modules.

  • Correlation functions.

  • LocalDescriptors class.

  • interface.pxi.

Erin Teich

  • Wrote environment matching module.

  • BondOrder (with Julia Dshemuchadse).

  • Angular separation (with Andrew Karas).

  • Contributed to LocalQl development.

  • Wrote LocalBondProjection module.

Eric Irrgang

  • Authored (now removed) kspace code.

  • Numerous bug fixes.

  • Various contributions to freud.shape.

Chrisy Du

  • Authored all Steinhardt order parameters.

  • Fixed support for triclinic boxes.

Antonio Osorio

  • Developed TrajectoryXML class.

  • Various bug fixes.

  • OpenMP support.

Vyas Ramasubramani - Lead developer

  • Ensured PEP 8 compliance.

  • Added CircleCI continuous integration support.

  • Created environment module and refactored order module.

  • Rewrote most of freud docs, including order, density, and environment modules.

  • Fixed nematic order parameter.

  • Added properties for accessing class members.

  • Various minor bug fixes.

  • Refactored PMFT code.

  • Refactored Steinhardt order parameter code.

  • Wrote numerous examples of freud usage.

  • Rewrote most of freud tests.

  • Replaced CMake-based installation with setup.py using Cython.

  • Added code coverage metrics.

  • Added support for installing from PyPI, including ensuring that NumPy is installed.

  • Converted all docstrings to Google format, fixed various incorrect docs.

  • Debugged and added rotational autocorrelation code.

  • Added MSD module.

Bradley Dice - Lead developer

  • Cleaned up various docstrings.

  • HexOrderParameter bug fixes.

  • Cleaned up testing code.

  • Bumpversion support.

  • Reduced all compile warnings.

  • Added Python interface for box periodicity.

  • Added Voronoi support for neighbor lists across periodic boundaries.

  • Added Voronoi weights for 3D.

  • Added Voronoi cell volume computation.

  • Incorporated internal BiMap class for Boost removal.

  • Wrote numerous examples of freud usage.

  • Added some freud tests.

  • Added ReadTheDocs support.

  • Rewrote interface module into pure Cython.

  • Proper box duck-typing.

  • Removed nose from unit testing.

  • Used a lambda function to parallelize CorrelationFunction with TBB.

  • Finalized Boost removal.

Richmond Newman

  • Developed the freud box.

  • Solid liquid order parameter.

Carl Simon Adorf

  • Developed the Python box module.

Jens Glaser

  • Wrote kspace.pxi front-end.

  • Modifications to kspace module.

  • Nematic order parameter.

Benjamin Schultz

  • Wrote Voronoi module.

  • Fixed normalization in GaussianDensity.

  • Bugfixes in freud.shape.

Bryan VanSaders

  • Made Cython catch C++ exceptions.

  • Added shiftvec option to PMFT.

Ryan Marson

  • Various GaussianDensity bugfixes.

Yina Geng

  • Co-wrote Voronoi neighbor list module.

  • Added properties for accessing class members.

Carolyn Phillips

  • Initial design and implementation.

  • Package name.

Ben Swerdlow

  • Documentation and installation improvements.

James Antonaglia

  • Added number of neighbors as an argument to HexOrderParameter.

  • Bugfixes.

  • Analysis of deprecated kspace module.

Mayank Agrawal

  • Co-wrote Voronoi neighbor list module.

William Zygmunt

  • Helped with Boost removal.

Greg van Anders

  • Bugfixes for CMake and SSE2 installation instructions.

James Proctor

  • Cythonization of the cluster module.

Rose Cersonsky

  • Enabled TBB parallelism in the density module.

  • Fixed how C++ arrays were pulled into Cython.

Wenbo Shen

  • Translational order parameter.

Andrew Karas

  • Angular separation.

  • Wrote reference implementation for rotational autocorrelation.

Paul Dodd

  • Fixed CorrelationFunction namespace, added ComputeOCF class for TBB parallelization.

Tim Moore

  • Added optional rmin argument to density.RDF.

  • Enabled NeighborList indexing.

Alex Dutton

  • BiMap class for MatchEnv.

Matthew Palathingal

  • Replaced use of Boost shared arrays with shared_ptr in Cython.

  • Helped incorporate BiMap class into MatchEnv.

Kelly Wang

  • Enabled NeighborList indexing.

Yezhi Jin

  • Added support for 2D arrays in the Python interface to Box functions.

Source code

Eigen (http://eigen.tuxfamily.org/) is included as a git submodule in freud. Eigen is made available under the Mozilla Public License v.2.0 (http://mozilla.org/MPL/2.0/). Its linear algebra routines are used for various tasks including the computation of eigenvalues and eigenvectors.

fsph (https://bitbucket.org/glotzer/fsph) is included as a git submodule in freud. fsph is made available under the MIT license. It is used for the calculation of spherical harmonics, which are then used in the calculation of various order parameters, under the following license:

Copyright (c) 2016 The Regents of the University of Michigan

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Support and Contribution

Please visit our repository on GitHub for the library source code. Any issues or bugs may be reported at our issue tracker, while questions and discussion can be directed to our forum. All contributions to freud are welcome via pull requests! Please see the development guide for more information on requirements for new code.
