1 - Overview

PM allows one to compute various electronic and phononic properties of crystals.

There are currently three major versions of the code.

  1. The “old” code: http://grandcentral.apam.columbia.edu:5555/principia_materia_doc_v0/

  2. The “new” code: http://grandcentral.apam.columbia.edu:5555/pmdocs/

  3. The “development” code, which is documented here.

2 - Installation

How you can obtain and install PM.

Principia Materia (PM) consists of Python codes with C++ and Fortran extensions. There will soon be a variety of options for installing PM, but we presently recommend cloning the git repository, creating a virtual environment, installing the required dependencies, and compiling the source code. We also provide the option of having auto-completion at the Bash shell using argcomplete. Please note that Python 3.7 or later is required to install and use PM.

Obtaining the source code

Clone the git repository:

  git clone git@github.com:marianettigroup/principia-materia.git

Using a Virtual Environment

A virtual environment is recommended when working with PM. To set up a virtual environment:

  python -m venv ENVNAME 

(replace ENVNAME with the name you choose for the environment).

To launch the environment, issue the following command:

  source ENVNAME/bin/activate

This virtual environment must be activated whenever you run PM.

Installing external dependencies

PM depends on several external libraries: openmp, lapack, blas, gcc, gfortran

  • For Ubuntu 18.04 and later with gcc compilers
      sudo apt install gcc gfortran libopenblas-dev libgomp1 libhdf5-dev
  • For macOS with Homebrew
      brew install gcc openblas

Required Python modules for PM are listed in the file requirements.txt, and can be installed using pip:

  pip install -r requirements.txt

Installing PM

On both Ubuntu and macOS with Homebrew, pip can be used to install python dependencies, compile fortran, and install PM. The installation step can be executed with either the regular mode:

      pip install .

or in editable mode:

      pip install -e .

In editable mode, the repository is dynamically linked to serve as the Python module, so any changes to the code take effect without reinstalling the module. However, the extensions of the module that are written in Fortran must be recompiled to reflect changes to their source code. We currently recommend installing in editable mode, given that many changes are still being made to the code. In some situations, it is useful to install using pip install -e . --no-build-isolation.

Setup Shell Auto-Complete

The auto-complete feature is optional, and is realized with the argcomplete module. To activate this feature in Bash, for example, install the module first:

   pip install argcomplete
   activate-global-python-argcomplete

Then, within the virtual environment, source the pm-completion.sh script from the activation script ENVNAME/bin/activate:

  # append to ENVNAME/bin/activate
  source ${VIRTUAL_ENV}/../pm-completion.sh

Restart the environment to activate changes.

3 - Docsy Documentation

How to contribute to the documentation of PM.

The documentation for PM is generated using Hugo with the Docsy theme. The main advantage of Hugo is its speed: you can see your changes to the documentation essentially in real time when viewing the site on a local server. The Docsy theme brings all of the advanced features needed for technical documentation, including detailed support for code markup (via Prism), LaTeX support (via KaTeX), etc.

This documentation is stored in the github repository principia-materia-docsy. When you push to the github repository, the site will automatically be built using github actions and served at https://marianettigroup.github.io/. If you want to push to github without rebuilding, include [skip actions] in your commit message.

To view the documentation locally, start by installing hugo; snap is probably the most convenient way:

sudo snap install hugo

Now change into the directory of your local pm-docsy repo, and start the hugo server:

hugo server 

You can then view the docs by pointing your browser to the local port that hugo prints out, which is normally http://localhost:1313/.

If you want to add to the documentation, you will find the markdown source files in the repo here: content/en/docs/.

The command line input tags are documented in the following yaml file: data/input_tag_info.yaml. This file is automatically generated by the Data Aggregator module. There is a command line interface command which can regenerate the yaml file:

$ pm-get_variable_doc

- name: basis_atoms
  description: Basis atoms of the crystal structure.
  type: dict-float-array
  dimension: scalar \( \leftrightarrow N_{j}\times3\) where \(j\) labels species of atom.

4 - Command Line Functionality

Executing core functionality of PM from the command line.

Most key tasks can be performed from the command line, and here we document all such functionality.

4.1 - Input tags

Input tags used by PM from the command line.

List of Input Tags

lattice_vectors, basis_atoms,

lattice_vectors

type: real, matrix
dimension: 3x3
scripts: xx, yy

basis_atoms

  lattice_vectors: Lattice vectors in three dimensions, specified by a row-stacked 3x3 matrix.
  basis_atoms: Basis atoms of the crystal structure.
  supa: A matrix of integers which left multiplies lattice_vectors to create a superlattice.
  axial_strain: A vector of three floats which specifies the axial strain.
  strain: A 3x3 real, symmetric matrix which specifies a general state of strain.
  point_group: A string specifying the point group in Schoenflies notation.
  kpoint: A three dimensional vector specifying a kpoint.
  qpoint: A three dimensional vector specifying a qpoint.
  kpoint_mesh: A vector of three integers, specifying the kpoint mesh for electrons.
  qpoint_mesh: A vector of three integers, specifying the qpoint mesh for phonons.
  kpoint_path: A row-stacked matrix of kpoints used to create a path through the BZ for plotting electronic bands.
  qpoint_path: A row-stacked matrix of qpoints used to create a path through the BZ for plotting phononic bands.
  force_constants: A dictionary with keys being three dimensional vectors and values being NxN matrices, where N is the number of degrees of freedom in the unit cell.

Tags with vector or matrix values

Many variables will be vectors or matrices, and all such variables will be processed by our text parser. Below are several examples which illustrate the variety of different ways that vectors and matrices may be input.

lattice_vectors: 
  [[0,0.5,0.5],
   [0.5,0,0.5],
   [0.5,0.5,0]]

lattice_vectors: |
   0 1/2 1/2
   1/2 0 1/2
   1/2 1/2 0
  
lattice_vectors: 0 1/2 1/2; 1/2 0 1/2; 1/2 1/2 0

4.2 - Array Parser

Entering vectors or matrices at command line or in a YAML file.

For many different tasks performed in the PM suite, a vector or matrix must be provided at the command line or in a yaml file. PM has an array parser which allows substantial flexibility to facilitate this (see parse_array if interested in the details).

Consider entering the lattice_vectors tag. All of the below are valid syntax. The parser accepts the delimiters line break, semicolon, comma, and space, where line break takes highest precedence. If only one delimiter is present, the result will be a vector, while two delimiters yield a matrix, and so on.

lattice_vectors: 
  [[0.000000,2.764309,2.764309],
   [2.764309,0.000000,2.764309],
   [2.764309,2.764309,0.000000]]

lattice_vectors: |
   0   1/2 1/2
   1/2 0   1/2
   1/2 1/2 0

lattice_vectors: 0 1/2 1/2 ; 1/2 0 1/2 ; 1/2 1/2 0
lattice_vectors: 0 1/2 1/2 , 1/2 0 1/2 , 1/2 1/2 0
lattice_vectors: 0,1/2,1/2 ; 1/2,0,1/2 ; 1/2,1/2,0
lattice_vectors: 0,1/2,1/2;1/2,0,1/2;1/2,1/2,0

lattice_vectors: |
    a=1.8 c=10
    a*r3/2  a/2 0
    a*r3/2 -a/2 0
    0       0   c

In the last example, a substitution is performed, where the first line contains the variable definitions.
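PM's actual parser is the parse_array function mentioned above. Purely as an illustration of the behavior just described (delimiter precedence, fractions, and a leading line of variable definitions), here is a standalone toy sketch; it is not PM's implementation, and the handling of r3 as \(\sqrt{3}\) and of name=value lines is inferred from the example above:

```python
import math
import re
from fractions import Fraction

def parse_array_sketch(text):
    """Toy array parser: newline > semicolon > comma/space delimiters,
    exact fractions like 1/2, and an optional first line of name=value
    variable definitions (with r3 predefined as sqrt(3))."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    env = {"r3": math.sqrt(3)}
    # A first line made up solely of name=value pairs defines variables.
    if lines and all(re.fullmatch(r"\w+=[\w./]+", tok) for tok in lines[0].split()):
        for tok in lines[0].split():
            name, value = tok.split("=")
            env[name] = float(Fraction(value))
        lines = lines[1:]
    rows = []
    for line in lines:
        for row in line.split(";"):
            cells = [c for c in re.split(r"[,\s]+", row.strip()) if c]
            rows.append([
                float(Fraction(c)) if re.fullmatch(r"-?\d+(/\d+)?", c)
                else float(eval(c, {"__builtins__": {}}, env))  # e.g. a*r3/2
                for c in cells
            ])
    # One delimiter level yields a vector; two yield a matrix.
    return rows if len(rows) > 1 else rows[0]
```

For example, parse_array_sketch("0 1/2 1/2 ; 1/2 0 1/2 ; 1/2 1/2 0") yields the fcc matrix as nested lists of floats.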

One can test this on the command line as follows:

$ pm-lattice --lattice-vectors '1 0 0 ; 0 1 0 ; 0 0 1' --print-attribute lattice_vectors
lattice_vectors:
  [[ 1.00000000,  0.00000000,  0.00000000],
   [ 0.00000000,  1.00000000,  0.00000000],
   [ 0.00000000,  0.00000000,  1.00000000]]

4.3 - Prototype Crystals

A collection of sample crystals.

In order to test out the code, it is useful to have sample crystals. At present, there is a script called pm-prototype-xtal which contains such data, and the available structures can be seen via tab completion of the --print-attribute option. This is only meant to be temporary until we have a more polished solution.

$ pm-prototype-xtal --print-attribute perovskite

lattice_vectors:
  [[ 3.91300000,  0.00000000,  0.00000000],
   [ 0.00000000,  3.91300000,  0.00000000],
   [ 0.00000000,  0.00000000,  3.91300000]]
basis_atoms:
  Sr: 
      [[ 0.50000000,  0.50000000,  0.50000000]]
  Ti: 
      [[ 0.00000000,  0.00000000,  0.00000000]]
  O: 
      [[ 0.50000000,  0.00000000,  0.00000000],
       [ 0.00000000,  0.50000000,  0.00000000],
       [ 0.00000000,  0.00000000,  0.50000000]]

4.4 - Symmetrizing atoms in crystal

A key task is to symmetrize the atoms such that they transform like irreducible representations.

Here we illustrate how to symmetrize the atomic orbits at a given q-point according to irreducible representations of the little group using the script pm-orbit-qrep. Take the example of \(\bm{q}=(\frac{1}{2},\frac{1}{2},\frac{1}{2})\) in cubic SrTiO\(_3\). We will use pm-prototype-xtal to generate the crystal structure file.

$ pm-prototype-xtal --perovskite > xtal.yml

Printing out the irreducible representations, we have

$ pm-orbit-qrep -i xtal.yml --point-group Oh --qpoint 1/2,1/2,1/2 --print-irr-reps 

Orbit Key: ('Sr', 0), Atom Index: [0]
A2u
Orbit Key: ('Ti', 0), Atom Index: [0]
A1g
Orbit Key: ('O', 0), Atom Index: [0, 1, 2]
T1u

We can also print out the irreducible representation vectors.

$ pm-orbit-qrep -i xtal.yml --point-group Oh --qpoint 1/2,1/2,1/2 --print-irr-vectors

irr-vectors
Orbit Key: ('Sr', 0), Atom Index: [0]
A2u
('A2u', 0)
[[1.]]

Orbit Key: ('Ti', 0), Atom Index: [0]
A1g
('A1g', 0)
[[1.]]

Orbit Key: ('O', 0), Atom Index: [0, 1, 2]
T1u
('T1u', 0)
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]

4.5 - Symmetrizing displacements in Crystal

A key task is to symmetrize the basis such that the vectors transform like irreducible representations.

Here we illustrate how to symmetrize displacements at a given q-point according to irreducible representations of the little group using the script pm-disp-qrep. Take the example of \(\bm{q}=(\frac{1}{2},\frac{1}{2},\frac{1}{2})\) in cubic SrTiO\(_3\). We will use pm-prototype-xtal to generate the crystal structure file.

$ pm-prototype-xtal --perovskite > xtal.yml

Printing out the irreducible representations, we have

$ pm-disp-qrep -i xtal.yml --point-group Oh --qpoint 1/2,1/2,1/2  --print-irr-reps
('Sr', 0) [0]
T2g
('Ti', 0) [1]
T1u
('O', 0) [2 3 4]
A1g+Eg+T1g+T2g

We can also print out the irreducible representation vectors.

$ pm-disp-qrep -i xtal.yml --point-group Oh --qpoint 1/2,1/2,1/2 --print-irr-vectors
T2g[Sr]
 0.0000  0.0000  1.0000
 1.0000  0.0000  0.0000
 0.0000  1.0000  0.0000
T1u[Ti]
 1.0000  0.0000  0.0000
 0.0000  1.0000  0.0000
 0.0000  0.0000  1.0000
A1g[O]
 0.5774  0.0000  0.0000  0.0000  0.5774  0.0000  0.0000  0.0000  0.5774
Eg[O]
 0.7071  0.0000  0.0000  0.0000 -0.7071  0.0000  0.0000  0.0000  0.0000
-0.4082  0.0000  0.0000  0.0000 -0.4082  0.0000  0.0000  0.0000  0.8165
T1g[O]
 0.0000  0.0000  0.0000  0.0000  0.0000  0.7071  0.0000 -0.7071  0.0000
 0.0000  0.0000 -0.7071  0.0000  0.0000  0.0000  0.7071  0.0000  0.0000
 0.0000  0.7071  0.0000 -0.7071  0.0000  0.0000  0.0000  0.0000  0.0000
T2g[O]
 0.0000  0.7071  0.0000  0.7071  0.0000  0.0000  0.0000  0.0000  0.0000
 0.0000  0.0000  0.0000  0.0000  0.0000  0.7071  0.0000  0.7071  0.0000
 0.0000  0.0000  0.7071  0.0000  0.0000  0.0000  0.7071  0.0000  0.0000
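The printed symmetrized vectors form orthonormal rows (to the displayed precision). For example, the A1g[O] vector can be checked against the first Eg[O] row with plain numpy, using the values from the output above:

```python
import numpy as np

# A1g[O] and the first Eg[O] vector, truncated to the printed precision.
a1g = np.array([0.5774, 0, 0, 0, 0.5774, 0, 0, 0, 0.5774])
eg0 = np.array([0.7071, 0, 0, 0, -0.7071, 0, 0, 0, 0])

assert abs(np.linalg.norm(a1g) - 1) < 1e-3  # normalized (to output precision)
assert abs(a1g @ eg0) < 1e-3                # vectors of different irreps are orthogonal
```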

This script will also block diagonalize a dynamical matrix provided via the --dynamical-matrix option. Consider the dynamical matrix at the gamma point for STO:

$ cat dmat.yaml
dynamical_matrix_real: 
      3.6228  0       0      -2.990   0       0       0.2751  0       0       0.2751  0       0      -1.1822  0       0     
      0       3.6228  0       0      -2.990   0       0      -1.1822  0       0       0.2751  0       0       0.2751  0     
      0       0       3.6228  0       0      -2.990   0       0       0.2751  0       0      -1.1822  0       0       0.2751
     -2.990   0       0       2.060   0       0       0.2442  0       0       0.2442  0       0       0.4416  0       0     
      0      -2.990   0       0       2.060   0       0       0.4416  0       0       0.2442  0       0       0.2442  0     
      0       0      -2.990   0       0       2.060   0       0       0.2442  0       0       0.4416  0       0       0.2442
      0.2751  0       0       0.2442  0       0       4.0322  0       0       0.90    0       0      -5.460   0       0     
      0      -1.1822  0       0       0.4416  0       0      11.6616  0       0      -5.460   0       0      -5.460   0     
      0       0       0.2751  0       0       0.2442  0       0       4.0322  0       0      -5.460   0       0       0.90  
      0.2751  0       0       0.2442  0       0       0.90    0       0       4.0322  0       0      -5.460   0       0     
      0       0.2751  0       0       0.2442  0       0      -5.460   0       0       4.0322  0       0       0.90    0     
      0       0      -1.1822  0       0       0.4416  0       0      -5.460   0       0      11.6616  0       0      -5.4605
     -1.1822  0       0       0.4416  0       0      -5.460   0       0      -5.460   0       0      11.6616  0       0     
      0       0.2751  0       0       0.2442  0       0      -5.460   0       0       0.90    0       0       4.0322  0     
      0       0       0.2751  0       0       0.2442  0       0       0.90    0       0      -5.460   0       0       4.0322

We can then block diagonalize as follows:

pm-prototype-xtal --perovskite2 | pm-disp-qrep --point-group Oh --qpoint 0,0,0 --dynamical-matrix ~/dmat.yaml
T1u-block : [('T1u', 0), ('T1u', 1), ('T1u', 2), ('T1u', 3)]
Dynamical matrix sub-block - first row
[[ 0.    0.    0.    0.  ]
 [ 0.    2.57  3.2  -0.18]
 [ 0.    3.2   3.01 -1.33]
 [ 0.   -0.18 -1.33 16.7 ]]
Mass matrix sub-block - first row
[[ 36.7    5.59 -27.74   0.  ]
 [  5.59  45.08  13.87   0.  ]
 [-27.74  13.87  69.71   0.  ]
 [  0.     0.     0.    16.  ]]
Mass renormalized Eigvecs
[[-0.324  0.931 -0.169 -0.017]
 [ 0.692  0.111 -0.713  0.   ]
 [-0.644 -0.348 -0.68  -0.046]
 [-0.035  0.    -0.034  0.999]]
Unique Frequencies (meV)
[-8.19 -0.58 19.56 66.13]

T2u-block : [('T2u', 0)]
Dynamical matrix sub-block - first row
[[3.13]]
Mass matrix sub-block - first row
[[16.]]
Mass renormalized Eigvecs
[[1.]]
Unique Frequencies (meV)
[28.61]
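Conceptually, the block diagonalization amounts to rotating the dynamical matrix into the basis of symmetrized vectors. A minimal standalone numpy sketch with a toy 2x2 matrix (illustrative only, not PM's API):

```python
import numpy as np

# Toy 2x2 "dynamical matrix", invariant under swapping the two coordinates.
D = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Rows of V are symmetrized vectors: the symmetric and antisymmetric
# combinations, standing in for irreducible representation vectors.
V = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# Rotating into the symmetrized basis block diagonalizes D; here the
# two 1x1 blocks are the eigenvalues 3 and 1.
D_block = V @ D @ V.T
```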

It may be desirable to perform a unitary transformation within some irreducible representation subspace. Consider a yaml file called unitary.yaml with the following content:

# unitary transformation to nearly diagonalize the T1u subspace
- block: ['Optical',0] 
  irr_rep: 'T1u'
  instances: [0,1]
  U: 
    [[ -0.73059222, 0.68281403 ],
     [  0.68281403, 0.73059222 ]]

The conventional output is:

$ pm-prototype-xtal --perovskite2 | pm-disp-qrep --point-group Oh --qpoint 0,0,0  --print-vectors

('Acoustic', 0) [0, 1, 2, 3, 4]
T1u
('T1u', 0)
[[0.447 0.    0.    0.447 0.    0.    0.447 0.    0.    0.447 0.    0.    0.447 0.    0.   ]
 [0.    0.447 0.    0.    0.447 0.    0.    0.447 0.    0.    0.447 0.    0.    0.447 0.   ]
 [0.    0.    0.447 0.    0.    0.447 0.    0.    0.447 0.    0.    0.447 0.    0.    0.447]]
('Optical', 0) [0, 1, 2, 3, 4]
2T1u
('T1u', 0)
[[-0.224  0.     0.     0.894  0.     0.    -0.224  0.     0.    -0.224  0.     0.    -0.224  0.     0.   ]
 [ 0.    -0.224  0.     0.     0.894  0.     0.    -0.224  0.     0.    -0.224  0.     0.    -0.224  0.   ]
 [ 0.     0.    -0.224  0.     0.     0.894  0.     0.    -0.224  0.     0.    -0.224  0.     0.    -0.224]]
('T1u', 1)
[[-0.866  0.     0.     0.     0.     0.     0.289  0.     0.     0.289  0.     0.     0.289  0.     0.   ]
 [ 0.    -0.866  0.     0.     0.     0.     0.     0.289  0.     0.     0.289  0.     0.     0.289  0.   ]
 [ 0.     0.    -0.866  0.     0.     0.     0.     0.     0.289  0.     0.     0.289  0.     0.     0.289]]
('O', 0) [2 3 4]
T1u+T2u
('T1u', 0)
[[ 0.408  0.     0.     0.408  0.     0.    -0.816  0.     0.   ]
 [ 0.    -0.816  0.     0.     0.408  0.     0.     0.408  0.   ]
 [ 0.     0.     0.408  0.     0.    -0.816  0.     0.     0.408]]
('T2u', 0)
[[ 0.     0.     0.707  0.     0.     0.     0.     0.    -0.707]
 [-0.707  0.     0.     0.707  0.     0.     0.     0.     0.   ]
 [ 0.     0.     0.     0.    -0.707  0.     0.     0.707  0.   ]]

Now we can print the relevant output after the unitary transformation:

$ pm-prototype-xtal --perovskite2 | pm-disp-qrep --point-group Oh --qpoint 0,0,0  --unitary-transformation ~/unitary.yaml --print-vectors

('Optical', 0) [0, 1, 2, 3, 4]
2T1u
('T1u', 0)
[[-0.428  0.     0.    -0.653  0.     0.     0.36   0.     0.     0.36   0.     0.     0.36   0.     0.   ]
 [ 0.    -0.428  0.     0.    -0.653  0.     0.     0.36   0.     0.     0.36   0.     0.     0.36   0.   ]
 [ 0.     0.    -0.428  0.     0.    -0.653  0.     0.     0.36   0.     0.     0.36   0.     0.     0.36 ]]
('T1u', 1)
[[-0.785  0.     0.     0.611  0.     0.     0.058  0.     0.     0.058  0.     0.     0.058  0.     0.   ]
 [ 0.    -0.785  0.     0.     0.611  0.     0.     0.058  0.     0.     0.058  0.     0.     0.058  0.   ]
 [ 0.     0.    -0.785  0.     0.     0.611  0.     0.     0.058  0.     0.     0.058  0.     0.     0.058]]

Given some distorted crystal structure, one can project the displacement onto symmetrized displacements. Consider the primitive unit cell of STO where the ferroelectric mode is allowed to condense by treating the nuclear motion as classical:

$ cat xtal_ferro.yaml

# Crystal
vec:
  [[ 3.89736730,  0.00092044,  0.00092044],
   [ 0.00092044,  3.89736730,  0.00092044],
   [ 0.00092044,  0.00092044,  3.89736730]]
atoms:
  - Sr:
      [[ 0.02422827,  0.02422827,  0.02422827]]
  - Ti:
      [[ 0.52563170,  0.52563170,  0.52563170]]
  - O:
      [[ 0.51647453,  0.01719097,  0.51647453],
       [ 0.51647453,  0.51647453,  0.01719097],
       [ 0.01719097,  0.51647453,  0.51647453]]

Projecting onto irreducible representations gives:

pm-prototype-xtal --perovskite2 | pm-disp-qrep --point-group Oh --qpoint 0,0,0  --unitary-transformation unitary.yaml  --project-xtal xtal_ferro.yaml
Projection onto irreducible representations
Block : Acoustic
----------------
T1u     0.1744  0.1744  0.1744

Block : Optical
---------------
T1u     -0.0353 -0.0353 -0.0353
1.T1u   -0.0018 -0.0018 -0.0018

Block : O
---------
T1u     -0.0023 -0.0023 -0.0023
T2u     0       0       0
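The projection itself is just an inner product of the distortion with each symmetrized vector. A standalone toy sketch (illustrative only; PM's conventions for masses and phases are not reproduced here):

```python
import numpy as np

# Toy distortion of a 3-component coordinate (think: one atom's displacement),
# built from a symmetric part plus a small antisymmetric part.
u = np.array([0.2, 0.2, 0.2]) + np.array([0.05, -0.05, 0.0])

# Orthonormal symmetrized vectors: a fully symmetric vector and an
# orthogonal partner, standing in for irreducible representation vectors.
v_sym = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
v_anti = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

# Projection amplitudes are inner products with the distortion.
amps = np.array([v_sym @ u, v_anti @ u])
```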

4.6 - Products of Irreducible Representations

Perform direct products and symmetric direct products of irreps.

A common group theoretical task is the direct product or symmetric direct product of irreducible representations. The composition of a product is obtained as

$ pm-irr_rep_product --point-group Oh --irr-reps T2g,T2g

# T2gxT2g  = A1g+Eg+T1g+T2g 

The composition of a symmetric product is obtained as

$ pm-irr_rep_product --point-group Oh --irr-reps T2g,T2g --symmetric 

# [T2gxT2g] = A1g+Eg+T2g

An arbitrary number of irreps can be handled

$ pm-irr_rep_product --point-group Oh --irr-reps T2g,T2g,Eg,Eg,A2g

# T2gxT2gxEgxEgxA2g  = 2A1g+2A2g+4Eg+4T1g+4T2g
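Behind such results lies character orthogonality: the product representation has character \(\chi_{a\times b}(g)=\chi_a(g)\chi_b(g)\), and the multiplicity of irrep \(i\) is \(n_i = \frac{1}{|G|}\sum_g \chi_{a\times b}(g)\,\chi_i(g)^*\). A standalone sketch using the C2v character table (all characters real, so no conjugation is needed; this is not PM's implementation):

```python
import numpy as np

# Character table of C2v; columns are the classes (E, C2, sigma_v, sigma_v').
chars = {
    "A1": np.array([1,  1,  1,  1]),
    "A2": np.array([1,  1, -1, -1]),
    "B1": np.array([1, -1,  1, -1]),
    "B2": np.array([1, -1, -1,  1]),
}
ORDER = 4  # |C2v|; every class contains a single element

def product_composition(a, b):
    """Decompose a x b via character orthogonality:
    n_i = (1/|G|) * sum_g chi_a(g) chi_b(g) chi_i(g)."""
    chi = chars[a] * chars[b]
    comp = {name: int(round((chi * c).sum() / ORDER)) for name, c in chars.items()}
    return {name: n for name, n in comp.items() if n > 0}
```

For example, product_composition("B1", "B2") decomposes as A2, analogous to the Oh results above.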

Another important task is constructing the irreducible representation vectors of the product representation in terms of the rows of the original irreducible representations.

$ pm-irr_rep_product --point-group Oh --irr-reps Eg,Eg  --print-irr-vectors  

#  EgxEg  = A1g+A2g+Eg
# 
# irr   Eg.0xEg.0  Eg.0xEg.1  Eg.1xEg.0  Eg.1xEg.1  
# A1g   0.7071     0.0        0.0        0.7071     
# A2g   0.0        0.7071     -0.7071    0.0        
# Eg.0  0.0        0.7071     0.7071     0.0        
# Eg.1  0.7071     0.0        0.0        -0.7071 

The same can be done for the symmetric product

$ pm-irr_rep_product --point-group Oh --irr-reps Eg,Eg  --print-irr-vectors  --symmetric 

# [EgxEg] = A1g+Eg
#
# irr   Eg.0xEg.0  Eg.0xEg.1  Eg.1xEg.1  
# A1g   0.7071     0.0        0.7071     
# Eg.0  0.0        1.0        0.0        
# Eg.1  0.7071     0.0        -0.7071

It might also be useful to print out the transpose of the results, which can be done with the --print-transpose flag:

$ pm-irr_rep_product --point-group Oh --irr-reps Eg,Eg  --print-irr-vectors  --symmetric --print-transpose 

# [EgxEg] = A1g+Eg
#
#           A1g     Eg.0    Eg.1    
# Eg.0xEg.0  0.7071  0.0000  0.7071 
# Eg.0xEg.1  0.0000  1.0000  0.0000 
# Eg.1xEg.1  0.7071  0.0000 -0.7071 

In many cases, one is only concerned with the identity representation, and the flag --only-print-identity will print only identity representations and prune out any basis vectors that are not contained in the product:

$ pm-irr_rep_product --point-group Oh --irr-reps Eg,Eg  --print-irr-vectors  --symmetric --only-print-identity 

# [EgxEg] = A1g+Eg
# 
#     Eg.0xEg.0 Eg.1xEg.1 
# A1g  0.7071    0.7071   

4.7 - pm-convert-id-main-to-dev

The main branch has a different set of phase conventions than the development branch, and the irreducible derivatives must be converted in order to use them on the development branch.

In order to extract the needed information from the main branch, check out a branch that has the relevant scripts to dump the output.

# check out the relevant offshoot of the main branch from the github remote
$ git checkout dev/output_scripts

# from the top level, reinstall to access needed cli
$ pip install -e .

Locate the irreducible derivative file that needs to be converted, and then run the following script.

# on branch dev/output_scripts
$ pm-convert-id-hdf5-to-yaml irreducible_derivatives.hdf5

# output is --> irr_deriv_and_basis_main.hdf5

Returning to the development branch (dev/only_phonons), the generated file irr_deriv_and_basis_main.hdf5 is read by a script which will rotate the irreducible derivatives and output them in the relevant format.

# on branch dev/only_phonons
$ pm-convert-id-main-to-dev --hdf5-name irr_deriv_and_basis_main.hdf5 

This script prints to standard out a yaml file containing all tags needed to run pm-phonons-via-irr-derivatives.

4.8 - pm-phonons-via-irr-derivatives

The phonon frequencies are computed from the irreducible derivatives. Various different levels of output are provided.

The following tags are required:

  1. point_group
  2. lattice_vectors
  3. basis_atoms
  4. quadratic_irr_derivatives

If there are complex irreducible derivatives, the imaginary part must be provided by the tag quadratic_irr_derivatives_imaginary. The code will print out a summary of the irreducible derivatives with varying levels of verbosity.

$ pm-phonons-via-irr-derivatives -i input.yaml --print-summary --verbosity long

There are several levels of verbosity that can be seen via tab completion.

5 - Modules

Documenting key classes and functions in each module.

Here we document various classes and functions within each module, and provide minimal examples.

5.1 - Command line interface

Overview of command line scripts in PM for developers.

All command line scripts for PM are stored in the cli module, end with the suffix _cli, and contain a class named CommandLine which derives from BaseCommandLine. In order to register a new cli, the script must be added to the setup.cfg file. Consider the example irr_rep_product_cli.py: there is an entry in the console_scripts section:

# setup.cfg file
console_scripts =
  pm-irr_rep_product =   principia_materia.cli.irr_rep_product_cli:CommandLine.main

After updating the file, you must run pip install . or pip install -e . to make the changes take effect. You also may need to restart your virtual environment for tab completion to be updated.

5.2 - Translation Group

The lattice is a central component of a crystal, and PM has a class for the infinite translation group (Lattice) and the finite translation group (LatticeFTG).

When performing calculations, we need various properties of both the infinite translation group (Lattice) and the finite translation group (LatticeFTG). The infinite translation group contains the lattice vectors and computes various information from them (e.g. vector lengths and angles, volume, reciprocal lattice, etc.). The FTG is naturally characterized by a supercell \(\hat{S}_{BZ}\), and LatticeFTG derives from Lattice. Finding all translation points within a given supercell is a key task. Similarly, another key task is finding all the k-points commensurate with a given FTG, though this is currently executed in the separate kpoints class.

Additionally, we will need to find the smallest supercell that will accommodate a given set of \(\bm q\)-points.

Lattice

Here we introduce the basic functionality of the Lattice class, using the face-centered cubic lattice as a demonstration:

>>> import numpy as np
>>> from principia_materia.translation_group import Lattice
>>> fcc_vec = 0.5*(np.ones((3,3))-np.identity(3))
>>> lattice = Lattice(vec=fcc_vec)
>>> lattice.vec
array([[0. , 0.5, 0.5],
       [0.5, 0. , 0.5],
       [0.5, 0.5, 0. ]])

With the above Lattice class object, we can compute various properties of the lattice.

>>> print("volume:", lattice.vol)
volume: 0.25
>>> print("lengths of the lattice vectors:", lattice.abc)
lengths of the lattice vectors: [0.70710678 0.70710678 0.70710678]
>>> print("angles between the lattice vectors:", lattice.abg)
angles between the lattice vectors: [60. 60. 60.]
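These quantities follow directly from the row-stacked lattice vectors; as a cross-check (plain numpy, not PM code), the fcc values above can be reproduced as:

```python
import numpy as np

A = 0.5 * (np.ones((3, 3)) - np.identity(3))  # fcc lattice vectors as rows

vol = abs(np.linalg.det(A))      # cell volume (0.25 for this fcc cell)
abc = np.linalg.norm(A, axis=1)  # lengths of the lattice vectors
# Angles (in degrees) between the vector pairs (b,c), (a,c), (a,b).
pairs = [(1, 2), (0, 2), (0, 1)]
abg = np.degrees([np.arccos(A[j] @ A[k] / (abc[j] * abc[k])) for j, k in pairs])
```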

There is a reciprocal lattice associated with the lattice, which is also constructed in this class, and we follow the convention \(\bm a_i\cdot \bm b_j =2\pi \delta_{ij} \), where \(\bm a_i\) is a real space vector and \(\bm b_j\) is a reciprocal space vector.

>>> print(lattice.rvec)
[[-6.28318531  6.28318531  6.28318531]  
 [ 6.28318531 -6.28318531  6.28318531]  
 [ 6.28318531  6.28318531 -6.28318531]] 
>>> print("volume of reciprocal lattice:", lattice.rvol)
volume of reciprocal lattice: 992.200854
>>> print("lengths of the reciprocal lattice vectors:", lattice.rabc)
lengths of the reciprocal lattice vectors: [10.88279619 10.88279619 10.88279619]
>>> print("angles between the reciprocal lattice vectors:", lattice.rabg)
angles between the reciprocal lattice vectors: [109.47122063 109.47122063 109.47122063]
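The convention \(\bm a_i\cdot \bm b_j =2\pi \delta_{ij}\) means the row-stacked reciprocal vectors are \(\hat{B} = 2\pi(\hat{A}^{-1})^T\), which reproduces the output above; a quick numpy cross-check (not PM code):

```python
import numpy as np

A = 0.5 * (np.ones((3, 3)) - np.identity(3))  # fcc lattice vectors as rows

# Row-stacked reciprocal lattice vectors enforcing a_i . b_j = 2*pi*delta_ij.
B = 2 * np.pi * np.linalg.inv(A).T

assert np.allclose(A @ B.T, 2 * np.pi * np.identity(3))
```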

Additionally, the Lattice class allows us to perform various operations on a lattice, for example applying an axial strain:

>>> strain = [0.02, 0, 0] # axial strain along x
>>> example_lattice = lattice.copy()
>>> example_lattice.axial_strain(strain)
>>> print(example_lattice.vec)
[[0.   0.5  0.5 ] 
 [0.51 0.   0.5 ] 
 [0.51 0.5  0.  ]]

LatticeFTG

LatticeFTG is the class which handles finite translation groups, and the key method of the class finds all translation points (i.e. t-points) contained within the supercell. While this result can be obtained from a simple Cartesian product in the case of a diagonal supercell, non-diagonal supercells are more subtle. Non-diagonal supercells are important in the context of lattice dynamics, because it is always preferable to work with the smallest number of irreducible derivatives which yield a smooth interpolation, and non-diagonal supercells yield multiplicities which cannot be obtained solely using diagonal supercells. Therefore, it is important to have a robust algorithm to find all lattice points within an arbitrary supercell; our algorithm is detailed in Appendix D of PRB 100, 014303. The same algorithm can be reversed to compute the indices of the lattice points in \(\mathcal{O}(1)\) time, removing the need to create a hash function.
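As a point of reference, a brute-force version of the t-point search is easy to state: keep the integer vectors whose fractional coordinates with respect to the supercell lie in [0, 1). The following standalone sketch does exactly that (PM's actual algorithm avoids the search entirely; the row-vector convention t @ S^{-1} is assumed to match supa left-multiplying the lattice vectors):

```python
import numpy as np
from itertools import product

def tpoints_brute_force(S):
    """Find all lattice points inside an integer 3x3 supercell S (rows):
    integer vectors t whose fractional coordinates t @ S^{-1} lie in
    [0, 1). The count equals the multiplicity |det S|."""
    S = np.asarray(S, dtype=int)
    multiplicity = abs(round(np.linalg.det(S)))
    Sinv = np.linalg.inv(S)
    bound = int(np.abs(S).sum())  # crude but sufficient search window
    pts = []
    for t in product(range(-bound, bound + 1), repeat=3):
        f = np.array(t) @ Sinv
        if np.all(f > -1e-9) and np.all(f < 1 - 1e-9):
            pts.append(t)
    assert len(pts) == multiplicity
    return np.array(sorted(pts))
```

Running this on the conventional-cell supercell of the fcc lattice used below reproduces the four t-points reported by LatticeFTG.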

Below is a simple example which is easily recognizable: a fcc lattice with a supercell that yields the conventional cubic cell.

>>> import numpy as np
>>> from principia_materia.translation_group import LatticeFTG
>>> fcc_vec = 0.5*(np.ones((3,3))-np.identity(3))
>>> S_bz = np.ones((3, 3)) - 2 * np.identity(3)
>>> lattice_ftg = LatticeFTG(fcc_vec,S_bz)
>>> lattice_ftg.vec
array([[0. , 0.5, 0.5], 
       [0.5, 0. , 0.5], 
       [0.5, 0.5, 0. ]])
>>> lattice_ftg.supa
array([[-1,  1,  1], 
       [ 1, -1,  1], 
       [ 1,  1, -1]]) 
>>> lattice_ftg.tpoints
array([[0, 0, 0], 
       [0, 0, 1], 
       [0, 1, 0], 
       [1, 0, 0]])
>>> lattice_ftg.get_index(lattice_ftg.tpoints)
array([0, 1, 2, 3])
>>> lattice_ftg.supa_vec
array([[1., 0., 0.], 
       [0., 1., 0.], 
       [0., 0., 1.]])
>>> lattice_ftg.supa_tpoints
array([[ 0 ,  0 ,  0 ],
       [1/2, 1/2,  0 ],
       [1/2,  0 , 1/2],
       [ 0 , 1/2, 1/2]], dtype=object)

KpointsFTG

We have a dedicated class for constructing k-points, though this is basically just a trivial wrapper around LatticeFTG.

>>> import numpy as np
>>> from principia_materia.translation_group import KpointsFTG
>>> fcc_vec = 0.5*(np.ones((3,3))-np.identity(3))
>>> S_bz = np.ones((3, 3)) - 2 * np.identity(3)
>>> qpts = KpointsFTG(vec=fcc_vec, supa=S_bz)
>>> qpts.ppoints
array([[0, 0, 0],
       [0, 0, 1],
       [0, 1, 0],
       [1, 0, 0]])
>>> qpts.kpoints
array([[ 0 ,  0 ,  0 ],
       [1/2, 1/2,  0 ],
       [1/2,  0 , 1/2],
       [ 0 , 1/2, 1/2]], dtype=object)
>>> qpts.ppoints_to_kpoints(
...      np.array([[0, 0, 1],
...                [1, 0, 0]])  )
array([[1/2, 1/2,  0 ],
       [ 0 , 1/2, 1/2]], dtype=object)
>>> qpts.kpoints_to_ppoints(
...      np.array([[0.5, 0.5, 0  ],
...                [0  , 0.5, 0.5]])  )
array([[0, 0, 1],
       [1, 0, 0]])
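The mapping between integer p-points and fractional k-points is linear in the supercell matrix. Assuming the same row-vector convention as above (so that k = p @ S^{-1}), the outputs can be cross-checked with plain numpy (not PM code):

```python
import numpy as np

S = np.ones((3, 3)) - 2 * np.identity(3)  # the supercell matrix (supa)
Sinv = np.linalg.inv(S)

# k = p @ S^{-1} maps integer p-points to fractional k-points,
# and p = k @ S inverts the map.
ppoints = np.array([[0, 0, 1], [1, 0, 0]])
kpoints = ppoints @ Sinv
```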

KpointsFSG

Similar to KpointsFTG, the class KpointsFSG is basically a trivial wrapper around LatticeFSG, which handles the point symmetry of the lattice. Given that the most common use cases of point symmetry are in reciprocal space, only KpointsFSG is outlined here. KpointsFSG derives from KpointsFTG, so it shares all the same attributes outlined above.

>>> import numpy as np
>>> from principia_materia.translation_group import KpointsFSG
>>> fcc_vec = 0.5*(np.ones((3,3))-np.identity(3))
>>> S_bz = 2*np.identity(3)
>>> qpts = KpointsFSG(vec=fcc_vec, supa=S_bz, pg='Oh')
>>> qpts.kpoints
array([[ 0 ,  0 ,  0 ],
       [1/2,  0 ,  0 ],
       [ 0 , 1/2,  0 ],
       [1/2, 1/2,  0 ],
       [ 0 ,  0 , 1/2],
       [1/2,  0 , 1/2],
       [ 0 , 1/2, 1/2],
       [1/2, 1/2, 1/2]], dtype=object)
>>> qpts.irr_kpoints
array([[ 0 ,  0 ,  0 ],
       [1/2,  0 ,  0 ],
       [1/2, 1/2,  0 ]], dtype=object)
>>> qpts.irr_ppoints
array([[0, 0, 0],
       [1, 0, 0],
       [1, 1, 0]])
>>> qpts.N_irr
3
>>> qpts.irr_indexes
array([0, 1, 3])
>>> qpts.indexes_map_irr_index
array([0, 1, 1, 3, 1, 3, 3, 1])
>>> qpts.indexes_map_irr_order
array([0, 1, 1, 2, 1, 2, 2, 1])
>>> qpts.stars
[[0], [1, 2, 4, 7], [3, 5, 6]]
>>> qpts.little_groups
['Oh', 'D3d', 'D4h']
>>> qpts.kpoint_type
['real', 'real', 'real']
>>> # operations that yield 4th point from irr counterpart
>>> qpts.indexes_map_irr_ops[4]
['c3be', 'ci3de', 'c2y', 'c4x', 'c4z', 'c2d', 'Ic3be', 'Ici3de', 'Ic2y', 'Ic4x', 'Ic4z', 'Ic2d']

QpointsN

Next, we can use the above information to find irreducible \(Q\)-points, where lattice_vectors, supa, and pg are assumed to be defined as in the preceding examples.

>>> from principia_materia.translation_group import QpointsN
>>> qpointsn = QpointsN(vec=lattice_vectors, supa=supa, order=3, pg=pg)
>>> qpointsn.find_irreducible_Qpoints()
array([[[Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)],                
        [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)],                
        [Fraction(0, 1), Fraction(0, 1), Fraction(0, 1)]],               
        ...,                                                             
       [[Fraction(3, 4), Fraction(1, 4), Fraction(1, 2)],                
        [Fraction(3, 4), Fraction(1, 4), Fraction(1, 2)],                
        [Fraction(1, 2), Fraction(1, 2), Fraction(0, 1)]]], dtype=object)

get_minimum_supercell

Algorithm to find the minimum supercell for a given list of \(\textbf{q}\)-points

The minimum supercell multiplicity is derived in the accompanying paper. Using the implemented function, we find that for the \(\textbf{q}\)-point \(\textbf{q}=\left(\frac{1}{4}, \frac{3}{4}, 0\right)\) the supercell of minimum multiplicity is:

$$ \hat{S}_{\textbf{q}} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$

and for the \(Q\)-point \(Q=\left[\left(\frac{1}{4}, \frac{3}{4}, 0\right), \left(\frac{1}{4}, \frac{1}{2}, 0\right), \left(\frac{3}{4}, \frac{1}{4}, \frac{1}{2}\right)\right]\) the supercell of minimum multiplicity is:

$$ \hat{S}_{Q} = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 2 \end{bmatrix} $$

>>> import numpy as np
>>> from principia_materia.translation_group import get_minimum_supercell
>>> from principia_materia.io_interface import parse_array
>>> from principia_materia.mathematics import Fraction

>>> Qpoint = parse_array("""\
...     1/4 3/4 0
... """, dtype=Fraction)
>>> Qpoint
array([1/4, 3/4,  0 ], dtype=object)
>>> get_minimum_supercell(Qpoint)
array([[1, 1, 0], 
       [0, 4, 0], 
       [0, 0, 1]])
>>> Qpoint = parse_array("""\
...     1/4 3/4 0
...     1/4 1/2 0
...     3/4 1/4 1/2
... """, dtype=Fraction)
>>> Qpoint
array([[1/4, 3/4,  0 ],
       [1/4, 1/2,  0 ],
       [3/4, 1/4, 1/2]], dtype=object)
>>> get_minimum_supercell(Qpoint)
array([[4, 0, 0], 
       [0, 4, 0], 
       [0, 0, 2]])
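Assuming the usual commensuration convention, a supercell \(\hat{S}\) accommodates \(\textbf{q}\) when \(\hat{S}\textbf{q}\) is an integer vector, and the minimum supercell is the commensurate one with the smallest \(|\det\hat{S}|\). Under that assumption, the first result above can be checked directly with numpy:

```python
import numpy as np
from fractions import Fraction

# supercell returned by get_minimum_supercell for q = (1/4, 3/4, 0)
S = np.array([[1, 1, 0],
              [0, 4, 0],
              [0, 0, 1]])
q = np.array([Fraction(1, 4), Fraction(3, 4), Fraction(0)], dtype=object)

# S is commensurate with q: every entry of S q is an integer
assert all(x.denominator == 1 for x in S.dot(q))
# the multiplicity (number of primitive cells) is |det S| = 4
assert round(abs(np.linalg.det(S))) == 4
```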

5.3 - Symmetry

The symmetry module contains code related to point and space group symmetry.

Here we encode all information about the crystallographic point groups. In the near future, we will add space group information here as well.

Cornwell_group_matrices

Cornwell_point_group

PointGroup

get_little_group

5.4 - Representation

Build representation of group and decompose into irreducible representations.

This module contains all classes relevant to building a representation of a group, finding the composition in terms of irreducible representations, and building irreducible vectors.

BaseRepresentation

CharmBlochRep

ShiftMode

ClusterRep

DispClusterRep

SingleTensorRep

DirectProduct

SymmetricDirectProduct

StarCharmBlochRep and Stars

ProductofQIrreps

SymmetricProductofQIrreps

QStarRep

SymmetricQStarRep

5.5 - Finite Difference

This module sets up arbitrary order finite difference calculations for an arbitrary number of variables.

Finite difference is used to compute phonons and phonon interactions in PM, and this module isolates all the tasks associated with performing finite difference. Here we give some examples to illustrate the key features.

Let us consider a potential with three variables, labeled 0, 1, 2, and set up all derivatives to second order. Begin by instantiating the class:

>>> from principia_materia.finite_difference.finite_difference_np import FiniteDifferenceNP
>>> fd=FiniteDifferenceNP()

Set all the derivatives to be evaluated:

>>> fd.set_deriv(varset=[[0],[1],[2],[0,1],[0,2],[1,2]],
...              ordset=[[2],[2],[2],[1,1],[1,1],[1,1]])

Set the deltas to be the same for all variables and set up the finite difference amplitudes:

>>> fd.set_delta_global([0.01,0.02])
>>> fd.setup_amplitude()

We now have all the information we need to perform finite difference. For example, all amplitudes associated with a given set of variables are retrieved as:

>>> print(fd.get_var_amplitude([0,2]))
[[-0.02 -0.02]
 [-0.02  0.02]
 [-0.01 -0.01]
 [-0.01  0.01]
 [ 0.01 -0.01]
 [ 0.01  0.01]
 [ 0.02 -0.02]
 [ 0.02  0.02]]

In more complicated scenarios, the same amplitudes would appear multiple times, but this class accounts for that when making a final list of amplitudes. Take the following example containing second and fourth derivatives:

>>> fd.set_deriv(varset=[[0,1],[0,1]],
...              ordset=[[1,1],[2,2]])

Inspecting the list of amplitudes for variable [0,1], we see:

>>> print(fd.get_var_amplitude([0,1]))
[[-0.02 -0.02]
 [-0.02  0.02]
 [-0.01 -0.01]
 [-0.01  0.01]
 [ 0.01 -0.01]
 [ 0.01  0.01]
 [ 0.02 -0.02]
 [ 0.02  0.02]
 [-0.04 -0.04]
 [-0.04  0.04]
 [ 0.04 -0.04]
 [ 0.04  0.04]]

The same amplitudes are used for both the second and fourth derivatives, and the class is aware of this. Additionally, the fourth derivatives need cases where one amplitude is zero, which do not appear above. In this case, the class creates new variable sets [0] and [1] that contain these amplitudes. For example,

>>> print(fd.get_var_amplitude([0]))
[[-0.04]
 [-0.02]
 [ 0.02]
 [ 0.04]]
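The amplitudes above are the displacement patterns of central-difference stencils. As a generic illustration of the arithmetic that this module automates (this is not PM code), a mixed second derivative can be estimated from four such amplitudes:

```python
def f(x, y):
    # test potential with known mixed derivative: d2f/dxdy = 4*x*y
    return x**2 * y**2

h = 0.01  # finite difference delta for each variable

# central-difference stencil for d2f/dxdy evaluated at (1, 1),
# built from the four amplitudes (+-h, +-h)
d2 = (f(1 + h, 1 + h) - f(1 + h, 1 - h)
      - f(1 - h, 1 + h) + f(1 - h, 1 - h)) / (4 * h * h)

assert abs(d2 - 4.0) < 1e-9  # exact value is 4*1*1 = 4
```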

6 - Data Management

How PM handles data that it intakes and generates.

PM deals with varied data sets, some of which are read in from other sources and some of which are generated.

6.1 - Data Aggregator

Data Aggregator is a schema for defining input data, documenting the data, and characterizing relationships between input data.

Like many Python-based applications, PM needs to take input data from the command line and/or YAML input files. A mechanism is needed to characterize the data, enforce the data type of the input data, and possibly do some preprocessing of the data. Data Aggregator (DA) handles these tasks. DA uses descriptors to encode the properties of a collection of data.

Descriptors

Input data is characterized by descriptors. A descriptor describes a set of data associated with some entity, normally a class or a script, and is typically named after the class or command-line interface it corresponds to. Below we document the general structure of a descriptor named descriptor_name.

descriptor_name:
  type: generic  # allowed values are generic, class, or script
  description: 
    A description of the collection of variables. If argparse 
    is used, this will be passed on.
  inherit:
    - descriptor1 # inherit properties of descriptor1
    - descriptor2 # inherit properties of descriptor2
  conflicts: 
    - [var1, var2]        # var1 and var2 cannot both be specified
    - [var1, var3, var4]  # var1, var3, var4 cannot all be specified
  variables:  
    var1:
      type: int           # a full list of data types is provided elsewhere
      optional: True      # var1 is not required; default is False
      dimension: scalar   # this is only for PM documentation 
      code_alias: v1      # sometimes an alias is used within code
      alias:
        - variable1       # multiple aliases can be given for argparse
      choices:            # var1 can only have values of 1 or 2 
        - 1
        - 2
      requires: 
        - var5      # if var1 is provided, var5 must be given too
      default: 5.2  # a default value may be provided
      help: 
        A description of the variable.
      

Using DA to parse and enforce data types

One core piece of DA functionality is applying data types to input data. Consider a minimal example of a simple script that reads in some data from a yaml string:

import yaml
from principia_materia.io_interface.data_aggregator import DataAggregator

descriptor = yaml.safe_load("""
script_input:
  description:
    An example of a descriptor.
  variables:
    lattice_vectors:
      type: float-array
    scale:
      type: float
      help: a factor used to rescale the lattice vectors.
""")

data = yaml.safe_load("""
lattice_vectors: |
  0 1 1 
  1 0 1
  1 1 0
scale: 4.8
""")

data_ag = DataAggregator(
                'script_input',
                descriptor_definition = descriptor,
                input_data = data)

print (data_ag.data)
# {'lattice_vectors': array([[0., 1., 1.],
#        [1., 0., 1.],
#        [1., 1., 0.]]), 'scale': 4.8}

print (data_ag.data_tuple)
# script_input(lattice_vectors=array([[0., 1., 1.],
#        [1., 0., 1.],
#        [1., 1., 0.]]), scale=4.8)

DA uses the parse_array function to parse strings into arrays, which is documented elsewhere.
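The essence of that parsing can be sketched as follows. This is a hypothetical simplification, not PM's actual parse_array implementation: whitespace separates columns, while newlines or commas separate rows (matching the formats used in the examples here).

```python
import numpy as np
from fractions import Fraction

def parse_array_sketch(text, dtype=float):
    # hypothetical stand-in for PM's parse_array: whitespace splits
    # columns; newlines or commas split rows
    rows = [line.split() for line in text.replace(",", "\n").splitlines()
            if line.strip()]
    data = [[dtype(entry) for entry in row] for row in rows]
    return np.array(data, dtype=object if dtype is Fraction else None)

a = parse_array_sketch("0 1 1 , 1 0 1 , 1 1 0")
# a.shape == (3, 3); a[0, 1] == 1.0
b = parse_array_sketch("1/4 3/4 0", dtype=Fraction)
# b[0, 0] == Fraction(1, 4)
```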

Using DA with argparse

DA descriptors contain all the information needed to construct an argparse instance, which can be used to get information from the command line. We continue the previous example, but instead of supplying data directly, we will get all variables from the command line. The call to DataAggregator now provides no data; instead it instructs DA to use argparse, which is the default behavior.

data_ag = DataAggregator(
                'script_input',
                descriptor_definition = descriptor,
                use_argparse=True)

The default of use_argparse is True, so this variable only needs to be specified (as False) when one wants to avoid the overhead of instantiating argparse. We can then call the script, named test_script.py, from the command line:

$ test_script.py --lattice-vectors '0 1 1 , 1 0 1 , 1 1 0' --scale 4.8 

It should be emphasized that one can both provide input data and use argparse; by default argparse takes precedence, allowing one to easily override variables from the command line.

All of the data from the descriptor has been loaded into argparse, and issuing the help flag -h or --help will generate the following:

$ test_script.py -h
# usage: test_script.py [-h] [--lattice-vectors LATTICE_VECTORS] [--scale SCALE]
# 
# An example of a descriptor.
# 
# options:
#   -h, --help            show this help message and exit
#   --lattice-vectors LATTICE_VECTORS
#   --scale SCALE         a factor used to rescale the lattice vectors.

Using DA with input files

We have now considered taking input directly from a yaml string and taking input from the command line via argparse. Since a common use case is reading yaml input from files, DA also allows one to designate a variable that holds a list of yaml file names to be processed. Consider breaking our input into two yaml files:

# file1.yaml 
lattice_vectors: |
  0 1 1 
  1 0 1
  1 1 0
# file2.yaml
scale: 4.8

Now we tell DA which variable will contain the input file names using the input_files_tag='input_files' argument. Additionally, we need to add this variable to our descriptor:

descriptor['script_input']['variables']['input_files']={
  'optional':True,
  'type': 'string-list'
  }

data_ag = DataAggregator(
                'script_input',
                descriptor_definition = descriptor,
                input_files_tag = 'input_files',
                use_argparse = True)

We can then call the script:

$ test_script.py --input-files file1.yaml,file2.yaml

By default, we have input_files_tag = 'input_files' and use_argparse = True, so these variables do not need to be specified.

Using DA with a required input file

In addition to specifying input files as described above, it is possible to specify a required input file, which demands that an input file be given on the command line or piped into standard input. This is specified in the constructor using require_file=True. This is desirable for a script that requires enough input data that an input file will always be used, allowing the user to give just the file name without any flags. For example, we could have

# file1.yaml 
lattice_vectors: |
  0 1 1 
  1 0 1
  1 1 0
scale: 4.8

and then any of the three commands could be used:

$ test_script.py   file1.yaml
$ test_script.py < file1.yaml
$ cat file1.yaml | test_script.py

Setting the data hierarchy in DA

It should be emphasized that one can simultaneously specify variables through the constructor, the command line, input files, and a required input file, allowing data to be taken from four different sources. The precedence of each data source is specified by the hierarchy=[0,1,2,3] variable in the constructor, where each number in the list corresponds to one of the following data sources:

data_sources = [ input_file_data,    # data from input yaml files
                 required_file_data, # data from a required file
                 args_data,          # data from command line
                 input_data ]        # data supplied through constructor

The hierarchy variable starts with lowest priority and ends with highest priority, which means that the default behavior is that regular input files have lowest priority and data given in the constructor has the highest priority.
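This merging behavior can be sketched as repeated dictionary updates in hierarchy order. The following is a simplification of what DA does internally, using the variables from the running example:

```python
def merge_by_hierarchy(data_sources, hierarchy=(0, 1, 2, 3)):
    # apply sources lowest priority first; later updates win
    merged = {}
    for index in hierarchy:
        merged.update(data_sources[index])
    return merged

input_file_data = {"scale": 1.0, "lattice_vectors": "0 1 1, 1 0 1, 1 1 0"}
required_file_data = {}
args_data = {"scale": 4.8}  # the command line overrides the input file
input_data = {}

merged = merge_by_hierarchy(
    [input_file_data, required_file_data, args_data, input_data])
# merged["scale"] == 4.8, while lattice_vectors comes from the file
```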

Pre-defined descriptors

DA will look for a file called descriptors.yaml in the same directory where data_aggregator.py is located. Descriptors have been created for many classes and command line interfaces in PM.

Instantiating classes using DA

DA can be used to instantiate a class using the aggregated data. Continuing with the above example, we could instantiate the Lattice class, where the constructor requires lattice_vectors.

from principia_materia.translation_group.lattice import Lattice
lattice = data_ag.get_instance(Lattice)

DA will pass all variables that Lattice allows, and withhold the rest.
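The filtering that get_instance performs can be sketched with the standard inspect module. The Lattice class below is a hypothetical stand-in with the constructor signature described above, not the real PM class:

```python
import inspect

def get_instance_sketch(cls, data):
    # pass only the keyword arguments the constructor accepts
    allowed = inspect.signature(cls.__init__).parameters
    kwargs = {key: value for key, value in data.items() if key in allowed}
    return cls(**kwargs)

class Lattice:  # hypothetical stand-in for PM's Lattice class
    def __init__(self, lattice_vectors):
        self.lattice_vectors = lattice_vectors

data = {"lattice_vectors": [[0, 1, 1], [1, 0, 1], [1, 1, 0]], "scale": 4.8}
lattice = get_instance_sketch(Lattice, data)
# "scale" was withheld, since Lattice's constructor does not accept it
```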

6.2 - Database

PM uses a sqlite database to store information related to first-principles calculations.

6.3 - hdf5

PM uses hdf files to store irreducible derivatives, force tensors, etc.

7 - Unit Testing

Unit testing PM.

Here we document how unit testing is performed for PM, and provide an overview of what sort of tests are performed. Unit tests for PM are stored in a separate repository called principia-materia-tests.

PM performs unit testing with Pytest, which is a standard framework for unit testing within python. All unit tests may be run by going into the repo, installing the required dependencies, and executing pytest.

git clone git@github.com:marianettigroup/principia-materia-tests.git
cd principia-materia-tests
pip install -r requirements.txt
pytest

An individual test can be run by changing into a directory and running pytest on a specific test. For example,

cd tests/translation_group
pytest test_translation_group.py