CRYSTAL performs ab initio calculations of the ground state energy, electronic wave function and properties of periodic systems. The CRYSTAL software was jointly developed by the Theoretical Chemistry Group at the University of Torino and the Computational Materials Science Group at CCLRC Daresbury Laboratory (UK).
The CRYSTAL package computes the electronic structure using either Hartree-Fock or density functional theory. In each case the fundamental approximation made is the expansion of the single-particle wave functions as a linear combination of atom-centred atomic orbitals (LCAO) built from Gaussian functions. Powerful screening techniques are used to exploit real-space locality. The code may be used to perform consistent studies of the physical, electronic and magnetic structure of molecules, polymers, surfaces and crystalline solids. CRYSTAL has been applied to studies of defects in ionic materials, the stability of minerals and oxide surface chemistry.
Only licensed users may access CRYSTAL. If you are not a licensed user and wish to use CRYSTAL, please see the main CRYSTAL page for details.
The CRYSTAL executable can be found in
CRYSTAL06 is available under
To access it, it is necessary to send a request to the helpdesk so that we can verify that you are a licensed user.
CRYSTAL holds a capability rating for excellent scalability at the silver level. This makes CRYSTAL calculations eligible for a discount of 15% when running on 512 processors. Users from projects not yet set up for a CRYSTAL discount should contact the HPCx help desk.
First, it is STRONGLY recommended that production jobs on HPCx be direct SCF runs.
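As a hedged sketch (check this against the manual for your CRYSTAL version), a direct SCF run is requested by adding the SCFDIR keyword to the final (SCF) section of the input, alongside any other directives given there:

```
SCFDIR
END
```

With SCFDIR the two-electron integrals are recomputed on the fly each cycle rather than stored, which is what makes large runs practical on HPCx.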
The parallel version of the CRYSTAL code installed on the HPCx system may be run in one of two modes:
Replicated data mode. In this mode none of the large arrays used by CRYSTAL are distributed across the processors. This results in a code that requires very little communication, but which is limited in the number of processors to which it can scale. As a rule of thumb, the maximum number of processors that should be used is the number of k points in the problem. It should also be noted that this mode can perform appreciable amounts of I/O, even in a direct SCF run.
Distributed data mode. In this mode all the large arrays, such as the Fock matrix in k space, the eigenvectors and the grid used by density functional theory, are distributed across the processors. This code requires many more communications than the replicated version, but is capable of using many more processors effectively. This mode performs very little I/O. One should also note that the distributed data mode supports only a subset of all possible CRYSTAL options.
By default the replicated mode is used. To use the distributed data version you must add the MPP directive to the last section of the input file, e.g.:
SILICON BULK: STO-3G
CRYSTAL
0 0 0
227
5.42
1
14 .125 .125 .125
END
14 3
1 0 3 2. 0.
1 1 3 8. 0.
1 1 3 4. 0.
99 0
END
END
8 4 8
MPP
END
(N.B. this is purely an example! This job is much too small to be worth running in parallel, except as a test.)
The extra communications incurred by the distributed data mode mean that one should consider using it only for large cases and, ideally, low-symmetry cases. Quite what constitutes a large case will depend somewhat on the physical problem to be investigated, but as a rule of thumb problems with fewer than 1300 basis functions should not be run in the distributed data mode. For more information see the link to the CRYSTAL benchmarks page at the end of this page.
CRYSTAL reports that a LIMIT (e.g. LIM016) needs to be increased.
Although all the large data structures in CRYSTAL are now dynamically allocated, some of the smaller ones still use parametrized dimensions. If one of these dimensions is exceeded, a message like the above is produced. If this prevents you from running the jobs you wish to run, please contact the help desk so that the code can be recompiled with more appropriate dimensions. If you do this you must include your input file.
CRYSTAL reports that 'MPP DOES NOT SUPPORT' an option
As noted above, the distributed data (a.k.a. MPP) code does not support all of the possible options in CRYSTAL. The following are not supported, and where possible alternatives are suggested:
These convergence acceleration techniques are yet to be implemented. Instead, try increasing the amount of Fock matrix mixing (FMIXING), coupled with level shifting (LEVSHIFT) and/or, possibly for metallic systems, Fermi level smearing (SMEAR).
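As a sketch (the numerical values below are illustrative starting points, not recommendations), these keywords go in the final (SCF) section of the input: FMIXING takes a mixing percentage, LEVSHIFT a shift in units of 0.1 hartree plus a lock flag, and SMEAR an electronic temperature in hartree:

```
FMIXING
70
LEVSHIFT
5 1
SMEAR
0.01
END
```

Consult the CRYSTAL manual for the exact argument conventions in your version before relying on these values.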
Symmetry adaptation of the Fock matrix in k space is not possible with the distributed data version of the code. Note that the calculation will still run without these directives, but there may be a performance penalty.
Checking for linear dependence of the basis set by diagonalizing the overlap matrix is not implemented.
Restricted open-shell Hartree-Fock runs are not possible with the distributed data code. It may be possible to use unrestricted Hartree-Fock instead, but the different form of the Hamiltonian could produce different answers.
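If unrestricted Hartree-Fock is an acceptable substitute, it is selected (as an assumption to verify against the manual for your CRYSTAL version) with the UHF keyword in the Hamiltonian part of the final section of the input:

```
UHF
END
```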
For distributed data DFT runs, numerical evaluation of the DFT components of the Kohn-Sham Hamiltonian must be used. Note that this is NOT the default, and the NUMERICA keyword MUST be included in the DFT part of the third section of the input. A very simple example of an MPP input file for a DFT case is:
SILICON STO-2G B3LYP
CRYSTAL
0 0 0
227
5.42
1
14 .125 .125 .125
END
14 3
1 0 2 2. 0.
1 1 2 8. 0.
1 1 2 4. 0.
99 0
END
DFT
NUMERICA
B3LYP
END
END
4 0 4
MPP
END
CRYSTAL home page at Daresbury Laboratory.
MPP CRYSTAL benchmarks page
http://www.hpcx.ac.uk/research/materials/crystal.html | contact: email@example.com | © UoE HPCX Ltd