hoomd.device
Overview
CPU          Select the CPU to execute simulations.
Device       Base class device object.
GPU          Select a GPU or GPU(s) to execute simulations.
auto_select  Automatically select the hardware device.
Details
Devices.
Use a Device class to choose which hardware device(s) should execute the simulation. Device also sets where to write log messages and how verbose the message output should be. Pass a Device object to hoomd.Simulation on instantiation to set the options for that simulation.
User scripts may instantiate multiple Device objects and use each with a different hoomd.Simulation object. One Device object may also be shared with many hoomd.Simulation objects.
Tip
Reuse Device objects when possible. There is a non-negligible overhead to creating each Device, especially on the GPU.
- class hoomd.device.CPU(num_cpu_threads=None, communicator=None, msg_file=None, notice_level=2)
Bases: Device
Select the CPU to execute simulations.
- Parameters
  - num_cpu_threads (int) – Number of TBB threads. Set to None to auto-select.
  - communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.
  - msg_file (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.
  - notice_level (int) – Minimum level of messages to print.
MPI
In MPI execution environments, create a CPU device on every rank.
- class hoomd.device.Device(communicator, notice_level, msg_file)
Bases: object
Base class device object.
Provides methods and properties common to CPU and GPU, including those that control where status messages are stored (msg_file), how many status messages HOOMD-blue prints (notice_level), and a method for user-provided status messages (notice).
TBB threads
Set num_cpu_threads to None and TBB will auto-select the number of CPU threads to execute. If the environment variable OMP_NUM_THREADS is set, HOOMD will use this value. You can also set num_cpu_threads explicitly.
Note
At this time, very few features use TBB for threading. Most users should employ MPI for parallel simulations. See Features for more information.
- property communicator
The MPI Communicator [read only].
- property msg_file
Filename to write messages to.
By default, HOOMD prints all messages and errors to Python's sys.stdout and sys.stderr (or the system's stdout and stderr when running in an MPI environment).
Set msg_file to a filename to redirect these messages to that file. Set msg_file to None to use the system's stdout and stderr.
Note
All MPI ranks within a given partition must open the same file. To ensure this, the given file name on rank 0 is broadcast to the other ranks. Different partitions may open separate files. For example:
communicator = hoomd.communicator.Communicator(
    ranks_per_partition=2)
filename = f'messages.{communicator.partition}'
device = hoomd.device.GPU(communicator=communicator,
                          msg_file=filename)
- Type
  str
- notice(message, level=1)
Write a notice message.
Write the given message string to the output defined by msg_file on MPI rank 0 when notice_level >= level.
- property notice_level
Minimum level of messages to print.
notice_level controls the verbosity of messages printed by HOOMD. The default level of 2 shows messages that the developers expect most users will want to see. Set the level lower to reduce verbosity, or as high as 10 to get extremely verbose debugging messages.
- Type
  int
- class hoomd.device.GPU(gpu_ids=None, num_cpu_threads=None, communicator=None, msg_file=None, notice_level=2)
Bases: Device
Select a GPU or GPU(s) to execute simulations.
- Parameters
  - gpu_ids (list[int]) – List of GPU ids to use. Set to None to let the driver auto-select a GPU.
  - num_cpu_threads (int) – Number of TBB threads. Set to None to auto-select.
  - communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.
  - msg_file (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.
  - notice_level (int) – Minimum level of messages to print.
Tip
Call GPU.get_available_devices to get a human-readable list of devices. gpu_ids = [0] will select the first device in this list, [1] will select the second, and so on. The ordering of the devices is determined by the GPU driver and runtime.
Device auto-selection
When gpu_ids is None, HOOMD will ask the GPU driver to auto-select a GPU. In most cases, this will select device 0. When all devices are set to a compute-exclusive mode, the driver will choose a free GPU.
MPI
In MPI execution environments, create a GPU device on every rank. When gpu_ids is left None, HOOMD will attempt to detect the MPI local rank environment and choose an appropriate GPU with id = local_rank % num_capable_gpus. Set notice_level to 3 to see status messages from this process. Override this auto-selection by providing appropriate device ids on each rank.
Multiple GPUs
Specify a list of GPUs to gpu_ids to activate a single-process multi-GPU code path.
Note
Not all features are optimized to use this code path, and it requires that all GPUs support concurrent managed memory access and have high-bandwidth interconnects.
- property compute_capability
Compute capability of the device.
The tuple includes the major and minor versions of the CUDA compute capability: (major, minor).
- enable_profiling()
Enable GPU profiling.
When using GPU profiling tools on HOOMD, select the option to disable profiling on start. Initialize and run a simulation long enough that all autotuners have completed, then open enable_profiling() as a context manager and continue the simulation for a time. Profiling stops when the context manager closes.
Example:
with device.enable_profiling():
    sim.run(1000)
- static get_available_devices()
Get the available GPU devices.
- static get_unavailable_device_reasons()
Get messages describing the reasons why devices are unavailable.
- property gpu_error_checking
Whether to check for GPU error conditions after every call.
When False (the default), error messages from the GPU may not be noticed immediately. Set to True to increase the accuracy of the GPU error messages at the cost of significantly reduced performance.
- Type
  bool
- static is_available()
Test if the GPU device is available.
- property memory_traceback
Whether GPU memory tracebacks should be enabled.
Memory tracebacks are useful for developers when debugging GPU code.
Deprecated since version 3.4.0: memory_traceback has no effect.
- Type
  bool
- hoomd.device.auto_select(communicator=None, msg_file=None, notice_level=2)
Automatically select the hardware device.
- Parameters
  - communicator (hoomd.communicator.Communicator) – MPI communicator object. When None, create a default communicator that uses all MPI ranks.
  - msg_file (str) – Filename to write messages to. When None, use sys.stdout and sys.stderr. Messages from multiple MPI ranks are collected into this file.
  - notice_level (int) – Minimum level of messages to print.
- Returns
  Instance of GPU if available, otherwise CPU.