
Super Kai (Kazuya Ito)


Dot and Matrix-vector multiplication in PyTorch


*Memos:

  • My post explains Matrix and Element-wise multiplication in PyTorch.
  • My post explains the functions and operators for Dot and Matrix multiplication and Element-wise calculation in PyTorch.

<Dot multiplication (product)>

  • Dot multiplication is the multiplication of two 1D tensors (arrays).
  • The rule you must follow for dot multiplication: tensors (arrays) A and B must each have 1 row and the same number of columns. (A plain-Python sketch follows the worked example below.)
   <A>         <B>
[a, b, c] x [d, e, f] = ad+be+cf
1 row       1 row
3 columns   3 columns

[2, -7, 4] x [-5, 0, 8] = 22
                     2x(-5)-7x0+4x8
  [2, -7, 4]
   x   x  x
 [-5,  0, 8]
      ||
 [-10, 0, 32]
   -10+0+32
      ||
      22
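To make the rule concrete, here is a minimal plain-Python sketch of the same calculation (the names a and b are just illustrative and not part of the PyTorch examples below):

# A minimal plain-Python sketch of the dot multiplication rule above.
a = [2, -7, 4]
b = [-5, 0, 8]

assert len(a) == len(b)  # Both must have the same number of columns.

print(sum(x * y for x, y in zip(a, b)))  # 22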

In PyTorch with dot(), matmul() or @:

*Memos:

  • dot() can do dot multiplication with two 1D tensors.
  • matmul() or @ can do dot, matrix-vector or matrix multiplication with two tensors of 1D or more dimensions.
import torch

tensor1 = torch.tensor([2, -7, 4])
tensor2 = torch.tensor([-5, 0, 8])

torch.dot(input=tensor1, tensor=tensor2)
tensor1.dot(tensor=tensor2)
torch.matmul(input=tensor1, other=tensor2)
tensor1.matmul(other=tensor2)
tensor1 @ tensor2
# tensor(22)
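As a hedged aside on the memos above, dot() is limited to 1D tensors, so passing a 2D tensor raises an error, while matmul() and @ still work (a small sketch; the variable names are only illustrative):

import torch

tensor2d = torch.tensor([[2, -7, 4], [6, 3, -1]])
tensor1d = torch.tensor([-5, 0, 8])

# torch.dot(tensor2d, tensor1d)  # Raises RuntimeError because dot() expects two 1D tensors.
torch.matmul(tensor2d, tensor1d)  # Works; see the matrix-vector section below.
# tensor([22, -38])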

In NumPy with dot(), matmul() or @:

*Memos:

  • dot() can do dot, matrix-vector or matrix multiplication with two arrays of 0D or more dimensions. *dot() is basically used to multiply 1D arrays.
  • matmul() or @ can do dot, matrix-vector or matrix multiplication with two arrays of 1D or more dimensions.
import numpy

array1 = numpy.array([2, -7, 4])
array2 = numpy.array([-5, 0, 8])

numpy.dot(array1, array2)
array1.dot(array2)
numpy.matmul(array1, array2)
array1 @ array2
# 22
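As a hedged aside on the memos above, dot() also accepts 0D (scalar) arrays, which matmul() rejects (a small sketch; the variable names are only illustrative):

import numpy

scalar1 = numpy.array(3)   # 0D array
scalar2 = numpy.array(-2)  # 0D array

numpy.dot(scalar1, scalar2)       # -6, dot() falls back to plain multiplication for 0D arrays.
# numpy.matmul(scalar1, scalar2)  # Raises ValueError because matmul() needs arrays of 1D or more.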

<Matrix-vector multiplication (product)>

  • Matrix-vector multiplication is the multiplication of a 2D or more D tensor (array) and a 1D tensor (array). *The order must be the 2D or more D tensor (array) first and the 1D tensor (array) second, not the other way around.
  • The rule you must follow for matrix-vector multiplication: the number of columns of tensor (array) A and of tensor (array) B must be the same. (A plain-Python sketch follows the 2D example below.)

A 2D and a 1D tensor (array):

    <A>          <B>
[[a, b, c], [d, e, f]] x [g, h, i] = [ag+bh+ci, dg+eh+fi]
2 rows                   1 row
(3) columns              (3) columns

[[2, -7, 4], [6, 3, -1]] x [-5, 0, 8] = [22, -38]
                            [2x(-5)-7x0+4x8, 6x(-5)+3x0-1x8]
 [[2, -7, 4], [6, 3, -1]]
   x   x  x    x  x   x
 [-5,  0, 8] [-5, 0,  8]
      ||          ||
[-10, 0, 32] [-30, 0, -8]
  -10+0+32     -30+0-8
      ||          ||
     [22,        -38]
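A minimal plain-Python sketch of the 2D x 1D case above: each row of A is dotted with B (the names A and B are just illustrative):

A = [[2, -7, 4], [6, 3, -1]]
B = [-5, 0, 8]

# Each row of A must have the same number of columns as B.
print([sum(x * y for x, y in zip(row, B)) for row in A])  # [22, -38]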

In PyTorch with matmul(), mv() or @. *mv() can do matrix-vector multiplication with a 2D tensor and a 1D tensor:

import torch

tensor1 = torch.tensor([[2, -7, 4], [6, 3, -1]])
tensor2 = torch.tensor([-5, 0, 8])

torch.matmul(input=tensor1, other=tensor2)
tensor1.matmul(other=tensor2)
torch.mv(input=tensor1, vec=tensor2)
tensor1.mv(vec=tensor2)
tensor1 @ tensor2
# tensor([22, -38])

In NumPy with dot(), matmul() or @:

import numpy

array1 = numpy.array([[2, -7, 4], [6, 3, -1]])
array2 = numpy.array([-5, 0, 8])

numpy.dot(array1, array2)
array1.dot(array2)
numpy.matmul(array1, array2)
array1 @ array2
# array([22, -38])

A 3D and a 1D tensor (array):

*The 3D tensor (array) A consists of three 2D tensors (arrays), each with 2 rows and 3 columns.

     <A>                      <B>
[[[a, b, c], [d, e, f]],  x [s, t, u] = [[as+bt+cu, ds+et+fu],
 [[g, h, i], [j, k, l]],                 [gs+ht+iu, js+kt+lu],
 [[m, n, o], [p, q, r]]]                 [ms+nt+ou, ps+qt+ru]]
2 rows                      1 row
(3) columns                 (3) columns

[[[2, -7, 4], [6, 3, -1]],  x [-5, 0, 8] = [[22, -38],
 [[-4, 9, 0], [5, 8, -2]],                  [20, -41],
 [[-6, 7, 1], [0, -9, 5]]]                  [38, 40]]
                               [[2x(-5)-7x0+4x8, 6x(-5)+3x0-1x8],
                                [-4x(-5)+9x0+0x8, 5x(-5)+8x0-2x8],
                                [-6x(-5)+7x0+1x8, 0x(-5)-9x0+5x8]]
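A minimal plain-Python sketch of the 3D x 1D case above: B is applied to each row of each 2D block of A (the names A and B are just illustrative):

A = [[[2, -7, 4], [6, 3, -1]],
     [[-4, 9, 0], [5, 8, -2]],
     [[-6, 7, 1], [0, -9, 5]]]
B = [-5, 0, 8]

print([[sum(x * y for x, y in zip(row, B)) for row in block] for block in A])
# [[22, -38], [20, -41], [38, 40]]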

In PyTorch with matmul() or @:

import torch

tensor1 = torch.tensor([[[2, -7, 4], [6, 3, -1]],
                        [[-4, 9, 0], [5, 8, -2]],
                        [[-6, 7, 1], [0, -9, 5]]])
tensor2 = torch.tensor([-5, 0, 8])

torch.matmul(input=tensor1, other=tensor2)
tensor1.matmul(other=tensor2)
tensor1 @ tensor2
# tensor([[22, -38],
#         [20, -41],
#         [38, 40]])
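To make the broadcasting over the first dimension explicit, here is a quick shape check (a small sketch repeating the tensors above):

import torch

tensor1 = torch.tensor([[[2, -7, 4], [6, 3, -1]],
                        [[-4, 9, 0], [5, 8, -2]],
                        [[-6, 7, 1], [0, -9, 5]]])
tensor2 = torch.tensor([-5, 0, 8])

print(tensor1.shape)              # torch.Size([3, 2, 3])
print(tensor2.shape)              # torch.Size([3])
print((tensor1 @ tensor2).shape)  # torch.Size([3, 2])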

In NumPy with dot(), matmul() or @:

import numpy

array1 = numpy.array([[[2, -7, 4], [6, 3, -1]],
                      [[-4, 9, 0], [5, 8, -2]],
                      [[-6, 7, 1], [0, -9, 5]]])
array2 = numpy.array([-5, 0, 8])

numpy.dot(array1, array2)
array1.dot(array2)
numpy.matmul(array1, array2)
array1 @ array2
# array([[22, -38],
#        [20, -41],
#        [38, 40]])
