*Memos:
- My post explains how to create a tensor.
- My post explains how to access a tensor.
- My post explains is_tensor(), numel() and device().
- My post explains type conversion with type(), to() and a tensor.
- My post explains type promotion, result_type(), promote_types() and can_cast().
- My post explains device conversion, from_numpy() and numpy().
- My post explains set_default_dtype(), set_default_device() and set_printoptions().
- My post explains manual_seed(), initial_seed() and seed().
__version__ can check the PyTorch version as shown below. *__version__ can be used with torch but not with a tensor:
import torch
torch.__version__ # 2.2.1+cu121
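If a script needs to branch on the installed version, the version string can be split into numeric parts. A minimal sketch, assuming a hypothetical parse_version() helper (not part of PyTorch); in practice you would pass torch.__version__ to it:

```python
def parse_version(version):
    # Drop the local build tag (e.g. '+cu121'), then split into ints.
    base = version.split("+")[0]
    return tuple(int(part) for part in base.split("."))

# In a real script: parse_version(torch.__version__)
print(parse_version("2.2.1+cu121"))  # (2, 2, 1)
```

Comparing tuples like (2, 2, 1) avoids the pitfalls of comparing version strings lexicographically (e.g. "2.10" < "2.2" as strings).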
cpu.is_available(), cpu.device_count() or cpu.current_device() can check if CPU is available as shown below:
*Memos:
- cpu.is_available(), cpu.device_count() or cpu.current_device() can be used with torch but not with a tensor.
- cpu.device_count() can get the number of CPUs. *It always returns 1.
- cpu.current_device() can get the index of a currently selected CPU. *It always returns cpu.
import torch
torch.cpu.is_available() # True
torch.cpu.device_count() # 1
torch.cpu.current_device() # cpu
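Since cpu.is_available() always returns True, a common pattern is to fall back to the CPU only when CUDA is unavailable. A minimal sketch, assuming a hypothetical choose_device() helper; in a real script you would pass torch.cuda.is_available() as the flag:

```python
def choose_device(cuda_available):
    # Prefer the GPU when CUDA is available, otherwise use the CPU.
    return "cuda" if cuda_available else "cpu"

# In a real script: torch.device(choose_device(torch.cuda.is_available()))
print(choose_device(True))   # cuda
print(choose_device(False))  # cpu
```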
cuda.is_available() or cuda.device_count() can check if GPU(CUDA) is available as shown below:
*Memos:
- cuda.is_available(), cuda.device_count(), cuda.current_device(), cuda.get_device_name() or cuda.get_device_properties() can be used with torch but not with a tensor.
- cuda.device_count() can get the number of GPUs.
- cuda.current_device() can get the index of a currently selected GPU.
- cuda.get_device_name() can get the name of a GPU.
- cuda.get_device_properties() can get the properties of a GPU.
- For cuda.get_device_name() and cuda.get_device_properties(), the 1st argument with torch is device (Optional for get_device_name(), Required for get_device_properties()-Type: str, int or device()):
  - The number must be zero or positive.
  - Only cuda can be set to device.
  - My post explains device().
import torch
torch.cuda.is_available() # True
torch.cuda.device_count() # 1
torch.cuda.current_device() # 0
torch.cuda.get_device_name()
torch.cuda.get_device_name(device='cuda:0')
torch.cuda.get_device_name(device='cuda')
torch.cuda.get_device_name(device=0)
torch.cuda.get_device_name(device=torch.device(device='cuda:0'))
torch.cuda.get_device_name(device=torch.device(device='cuda'))
torch.cuda.get_device_name(device=torch.device(device=0))
torch.cuda.get_device_name(device=torch.device(type='cuda'))
torch.cuda.get_device_name(device=torch.device(type='cuda', index=0))
# Tesla T4
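All the calls above name the same GPU because 'cuda', 'cuda:0', 0 and the equivalent device() objects all resolve to device index 0. That resolution can be sketched with a hypothetical normalize_device() helper (not a PyTorch function):

```python
def normalize_device(spec):
    # Accept 'cuda', 'cuda:N' or a bare index N and return 'cuda:N'.
    if isinstance(spec, int):
        return f"cuda:{spec}"
    return spec if ":" in spec else f"{spec}:0"

print(normalize_device("cuda"))    # cuda:0
print(normalize_device(0))         # cuda:0
print(normalize_device("cuda:0"))  # cuda:0
```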
torch.cuda.get_device_properties(device='cuda:0')
torch.cuda.get_device_properties(device='cuda')
torch.cuda.get_device_properties(device=0)
torch.cuda.get_device_properties(device=torch.device(device='cuda:0'))
torch.cuda.get_device_properties(device=torch.device(device='cuda'))
torch.cuda.get_device_properties(device=torch.device(device=0))
torch.cuda.get_device_properties(device=torch.device(type='cuda'))
torch.cuda.get_device_properties(device=torch.device(type='cuda', index=0))
# _CudaDeviceProperties(name='Tesla T4', major=7, minor=5,
# total_memory=15102MB, multi_processor_count=40)
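The fields returned by get_device_properties() (name, major, minor, total_memory, multi_processor_count) can be formatted into a readable summary. A sketch with a hypothetical summarize_gpu() helper, using the Tesla T4 values above; note the total_memory attribute is in bytes even though the repr prints MB:

```python
def summarize_gpu(name, major, minor, total_memory_bytes, multi_processor_count):
    # Format the fields of a _CudaDeviceProperties-like record.
    gib = total_memory_bytes / (1024 ** 3)
    return (f"{name} (compute capability {major}.{minor}, "
            f"{gib:.2f} GiB, {multi_processor_count} SMs)")

# Values taken from the Tesla T4 output above (15102 MiB converted to bytes).
print(summarize_gpu("Tesla T4", 7, 5, 15102 * 1024 * 1024, 40))
# Tesla T4 (compute capability 7.5, 14.75 GiB, 40 SMs)
```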
!nvidia-smi can get the information about GPUs as shown below:
!nvidia-smi
Wed May 15 13:18:15 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 56C P0 28W / 70W | 105MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
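The Memory-Usage column printed by nvidia-smi ('105MiB / 15360MiB' above) can be split into used and total values if a script needs them. A minimal parsing sketch, assuming a hypothetical parse_memory_usage() helper:

```python
def parse_memory_usage(field):
    # Split an nvidia-smi Memory-Usage field like '105MiB / 15360MiB'.
    used, total = (part.strip().removesuffix("MiB") for part in field.split("/"))
    return int(used), int(total)

print(parse_memory_usage("105MiB / 15360MiB"))  # (105, 15360)
```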