Super Kai (Kazuya Ito)

BatchNorm1d() in PyTorch

BatchNorm1d() returns a 2D or 3D tensor of zero or more elements computed by 1D batch normalization from a 2D or 3D input tensor of zero or more elements, as shown below:

*Memos:

  • The 1st argument for initialization is num_features(Required-Type:int). *It must be 1 <= x.
  • The 2nd argument for initialization is eps(Optional-Default:1e-05-Type:float). *It is added to the batch variance for numerical stability (see the formula sketch after this list).
  • The 3rd argument for initialization is momentum(Optional-Default:0.1-Type:float).
  • The 4th argument for initialization is affine(Optional-Default:True-Type:bool).
  • The 5th argument for initialization is track_running_stats(Optional-Default:True-Type:bool).
  • The 6th argument for initialization is device(Optional-Default:None-Type:str, int or device()).
  • The 7th argument for initialization is dtype(Optional-Default:None-Type:dtype).
  • The 1st argument is input(Required-Type:tensor of float).
  • The output tensor's requires_grad, which is False by default, is set to True by BatchNorm1d().
  • The input tensor's device and dtype must be the same as BatchNorm1d()'s device and dtype respectively.
  • batchnorm1d1.device and batchnorm1d1.dtype don't work.
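
For each feature, BatchNorm1d() computes y = (x - mean) / sqrt(var + eps) * weight + bias, where the mean and the biased variance are taken over the batch dimension. Below is a minimal sketch of that computation under the default settings (weight is initialized to 1 and bias to 0 when affine=True); the names y_manual and y_module are illustrative, not part of the PyTorch API:

import torch
from torch import nn

x = torch.tensor([[8., -3., 0.],
                  [1., 5., -2.]])

mean = x.mean(dim=0)                # Per-feature mean over the batch.
var = x.var(dim=0, unbiased=False)  # Biased variance is used for normalization.
y_manual = (x - mean) / torch.sqrt(var + 1e-05)

y_module = nn.BatchNorm1d(num_features=3)(input=x)
torch.allclose(y_manual, y_module)
# True

This also explains the 1.0000/-1.0000 outputs in the examples below: with only two rows per feature, each value is exactly one (biased) standard deviation away from the feature mean.
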
import torch
from torch import nn

tensor1 = torch.tensor([[8., -3., 0.],
                        [1., 5., -2.]])
tensor1.requires_grad
# False

batchnorm1d1 = nn.BatchNorm1d(num_features=3)
tensor2 = batchnorm1d1(input=tensor1)
tensor2
# tensor([[1.0000, -1.0000, 1.0000],
#         [-1.0000, 1.0000, -1.0000]],
#        grad_fn=<NativeBatchNormBackward0>)

tensor2.requires_grad
# True
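# tensor2.requires_grad is True because tensor2 was produced using
# batchnorm1d1's learnable weight and bias parameters (affine=True).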

batchnorm1d1
# BatchNorm1d(3, eps=1e-05, momentum=0.1, affine=True,
#             track_running_stats=True)

batchnorm1d1.num_features
# 3

batchnorm1d1.eps
# 1e-05

batchnorm1d1.momentum
# 0.1

batchnorm1d1.affine
# True

batchnorm1d1.track_running_stats
# True
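
# affine=True creates the learnable per-feature weight (gamma) and bias (beta)
# parameters, and track_running_stats=True keeps running_mean and running_var
# buffers that are updated with momentum after each training-mode forward pass.
# The running stats below reflect the single forward pass above; running_var
# is updated with the unbiased batch variance.
batchnorm1d1.weight
# Parameter containing:
# tensor([1., 1., 1.], requires_grad=True)

batchnorm1d1.bias
# Parameter containing:
# tensor([0., 0., 0.], requires_grad=True)

batchnorm1d1.running_mean
# tensor([0.4500, 0.1000, -0.1000])

batchnorm1d1.running_var
# tensor([3.3500, 4.1000, 1.1000])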

batchnorm1d2 = nn.BatchNorm1d(num_features=3)
batchnorm1d2(input=tensor2)
# tensor([[1.0000, -1.0000, 1.0000],
#         [-1.0000, 1.0000, -1.0000]],
#        grad_fn=<NativeBatchNormBackward0>)
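# Normalizing the already-normalized tensor2 leaves it unchanged: each of its
# features already has zero mean and unit (biased) variance.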

batchnorm1d = nn.BatchNorm1d(num_features=3, eps=1e-05, momentum=0.1, 
                             affine=True, track_running_stats=True, 
                             device=None, dtype=None)
batchnorm1d(input=tensor1)
# tensor([[1.0000, -1.0000, 1.0000],
#         [-1.0000, 1.0000, -1.0000]],
#        grad_fn=<NativeBatchNormBackward0>)

my_tensor = torch.tensor([[8.], [-3.], [0.], [1.], [5.], [-2.]])

batchnorm1d = nn.BatchNorm1d(num_features=1)
batchnorm1d(input=my_tensor)
# tensor([[1.6830], [-1.1651], [-0.3884], [-0.1295], [0.9062], [-0.9062]], 
#        grad_fn=<NativeBatchNormBackward0>)
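# With num_features=1, all 6 rows share one mean and variance, e.g.
# (8. - 1.5) / (89.5 / 6 + 1e-05) ** 0.5 = 1.6830 for the first row.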

my_tensor = torch.tensor([[[8.], [-3.], [0.]],
                          [[1.], [5.], [-2.]]])
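# This input is 3D with shape (N, C, L) = (2, 3, 1): num_features is C (dim 1),
# and each channel is normalized over the N and L dimensions together.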
batchnorm1d = nn.BatchNorm1d(num_features=3)
batchnorm1d(input=my_tensor)
# tensor([[[1.0000], [-1.0000], [1.0000]],
#         [[-1.0000], [1.0000], [-1.0000]]],
#        grad_fn=<NativeBatchNormBackward0>)
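
In evaluation mode, the module normalizes with the accumulated running statistics instead of the current batch's statistics. A minimal sketch, reusing batchnorm1d1 and tensor1 from above; the printed values are approximate, derived from the running_mean and running_var shown earlier:

batchnorm1d1.eval()
batchnorm1d1(input=tensor1)
# tensor([[4.1250, -1.5310, 0.0953],
#         [0.3005, 2.4199, -1.8116]], grad_fn=<NativeBatchNormBackward0>)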