The relevant parameters are listed here.
- auto_convert
- If this parameter is set equal to ON and an Abaqus/Explicit analysis is run in parallel with parallel=domain, the convert=select, convert=state, and convert=odb options will be run automatically at the end of the analysis if required. The default value is ON.
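For example, to disable the automatic conversion you could set:
auto_convert=OFF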
- cpus
- Number of processors to use during an analysis run if parallel processing is available. The default value for this parameter is 1.
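For example, a setting that requests four processors (an illustrative count) would be:
cpus=4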
- direct_solver
- Set this variable equal to DMP to choose the hybrid direct sparse solver explicitly. Set this variable equal to SMP to choose the pure thread-based direct sparse solver explicitly. If this parameter is omitted, the default is to use the pure thread-based direct sparse solver.
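For example, to select the hybrid direct sparse solver explicitly:
direct_solver=DMP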
- domains
- The number of parallel domains in Abaqus/Explicit. If the value is greater than 1, the domain decomposition will be performed regardless of the values of the parallel and cpus variables. However, if parallel=DOMAIN, the value of domains must be evenly divisible by the value of cpus. If this parameter is not set, the number of domains defaults to the number of processors used during the analysis run if parallel=DOMAIN or to 1 if parallel=LOOP.
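For example, the following illustrative combination satisfies the divisibility requirement (eight domains split across four processors):
cpus=4
domains=8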
- gpus
- Activate direct solver acceleration using GPGPU hardware in Abaqus/Standard. The value of this parameter should be the number of GPGPUs to use for an analysis. In an MPI-based parallel Abaqus/Standard analysis, this is the number of GPGPUs to use on each host.
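For example, to use one GPGPU per host you might set:
gpus=1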
- max_cpus
- Maximum number of processors allowed if parallel processing is available. If this parameter is not set, the number of processors allowed equals the number of available processors on the system.
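For example, to cap analyses at eight processors (an illustrative limit):
max_cpus=8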
- mp_file_system
- Type of file system available for an MPI-based parallel Abaqus analysis. The parameter must be set to a tuple; for example,
mp_file_system=(SHARED,LOCAL)
The first item in the tuple refers to the directory where the job was submitted, while the second refers to the job's scratch directory. If the file system hosting a directory is LOCAL, Abaqus will copy the required analysis files to the remote host machines and, at the end of the run, copy the output files back. In this case it must be possible to create the job's directory structure on all the hosts in mp_host_list. A SHARED file system means that the host machines share the same file system and file transfer is not necessary. With the recommended default setting of (DETECT, DETECT), Abaqus will determine the type of file system that exists. An MPI-based parallel Abaqus/Explicit analysis will use the scratch directory only if a user subroutine is used, whereas Abaqus/Standard normally writes large temporary files in this directory. Running on a local file system will generally improve performance.
- mp_host_list
- List of host machine names to be used for an MPI-based parallel Abaqus analysis, including the number of processors to be used on each machine; for example,
mp_host_list=[['maple',1],['pine',1],['oak',2]]
indicates that, if the number of cpus specified for the analysis is 4, the analysis will use one processor on a machine called maple, one processor on a machine called pine, and two processors on a machine called oak. The total number of processors defined in the host list must be greater than or equal to the number of cpus specified for the analysis. If the host list is not defined, Abaqus will run on the local system. When using a supported queuing system, this parameter does not need to be defined; if it is defined, it will be overridden by the queuing environment.
- mp_mode
- Set this variable equal to MPI to indicate that the MPI components are available on the system. Set mp_mode=THREADS to use the thread-based parallelization method. The default value is MPI where applicable.
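For example, to force the thread-based parallelization method:
mp_mode=THREADS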
- mp_mpi_implementation
- A dictionary or constant to specify the underlying MPI implementation to use. Generally, this variable does not need to be specified. A dictionary value is used to apply an MPI implementation according to the solver: the dictionary keys are STANDARD, EXPLICIT, or DEFAULTMPI, and the dictionary values are PMPI, IMPI, or NATIVE. On Linux, the default setting is the following dictionary:
mp_mpi_implementation={DEFAULTMPI: IMPI, STANDARD: PMPI}
A constant value (PMPI, IMPI, or NATIVE) is used when the same implementation applies to all solvers; for example,
mp_mpi_implementation=PMPI
- mp_mpirun_options
- A dictionary or string of options passed to the MPI launcher for an MPI-based parallel Abaqus analysis. Generally, this variable does not need to be set. With a dictionary value, the keys are the constants PMPI, IMPI, or NATIVE; for example,
mp_mpirun_options={PMPI: '-v -d -t', IMPI: '-verbose'}
A string value is used when the options apply to all MPI implementations; for example,
mp_mpirun_options='-v'
- mp_mpirun_path
- A dictionary to define the full path to the MPI launcher for a given MPI implementation. For example,
mp_mpirun_path={NATIVE: 'C:\\Program Files\\Microsoft MPI\\bin\\mpiexec.exe'}
mp_mpirun_path={PMPI: '/apps/SIMULIA/Abaqus/2023/linux_a64/code/bin/SMAExternal/pmpi/bin/mpirun',
                IMPI: '/apps/SIMULIA/Abaqus/2023/linux_a64/code/bin/SMAExternal/impi/bin/mpirun'}
- mp_num_parallel_ftps
- When performing parallel file staging using MPI-based parallelization, this parameter controls the number of simultaneous MPI file transfers. The first item controls the transfer of files to and from the temporary scratch directory; the second item controls the transfer of files to and from the analysis working directory. Setting either value to 1 disables the parallel file staging process. The use of file staging depends on the values specified in mp_file_system.
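For example, assuming the parameter is given as a two-item list (matching the first-item/second-item description above), an illustrative setting allowing four simultaneous transfers for each directory would be:
mp_num_parallel_ftps=[4, 4]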
- mp_rsh_command
- Preferred command to open a remote shell on the machines specified by mp_host_list. Abaqus needs to open a remote shell to create and remove directories and files if the file system is not shared. The default value for this option is platform-dependent; for example,
mp_rsh_command='ssh -n -l %U %H %C'
The following placemarkers are used:
- %U
- Username.
- %H
- The host where the remote shell is opened.
- %C
- The command to be executed on the host.
Abaqus automatically uses secure copy (scp) to copy files to and from remote hosts if this parameter is set to use secure shell. By default, this parameter is ignored in favor of the built-in MPI rsh/scp commands.
- order_parallel
- The default direct solver ordering mode in Abaqus/Standard if you do not specify the parallel ordering mode on the abaqus command line. If this parameter is set equal to OFF, the solver ordering will not be performed in parallel. If this parameter is set equal to ON, the solver ordering will be run in parallel. The default for parallel solver ordering is ON.
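For example, to turn off parallel solver ordering:
order_parallel=OFF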
- parallel
- The default parallel equation solution method in Abaqus/Explicit if the user does not specify the parallel method on the abaqus command line. Possible values are DOMAIN or LOOP; the default value is DOMAIN.
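For example, to select loop-level parallelization:
parallel=LOOP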
- standard_parallel
- The default parallel execution mode in Abaqus/Standard if you do not specify the parallel mode on the abaqus command line. If this parameter is set equal to ALL, both the element operations and the solver will run in parallel. If this parameter is set equal to SOLVER, only the solver will run in parallel. The default parallel execution mode is ALL.
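For example, to restrict parallel execution to the solver:
standard_parallel=SOLVER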