
[PyTorch series - 12]: PyTorch Basics - Tensor linear operations (dot product, cross product)

2021-08-08 15:42:26 Silicon based workshop of slow fire rock sugar

 



Chapter 1: Tensor Operation Overview

1.1 Overview

PyTorch provides a large number of tensor operations, broadly matching NumPy's multidimensional array operations, to support all kinds of complex computation on tensors.

Most of these operations apply the same function to each element of the array and collect the per-element results into a new array of the same shape.

 https://www.runoob.com/numpy/numpy-linear-algebra.html

 

1.2 Classification of operations

(1) Arithmetic operations: addition, subtraction, multiplication by a scalar, division by a scalar

(2) Function operations: sin, cos

(3) Rounding operations: round up, round down

(4) Statistical operations: maximum, minimum, mean

(5) Comparison operations: greater than, equal to, less than, sort

(6) Linear algebra operations: matrix operations, dot product, cross product
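As a quick illustration of these categories, here is a minimal sketch; the specific functions chosen are just representative examples, not an exhaustive list:

```python
import torch

t = torch.tensor([1.5, -2.5, 3.0])

print(t + 1)              # arithmetic: element-wise addition
print(torch.cos(t))       # function: element-wise cosine
print(torch.floor(t))     # rounding: round down
print(t.max(), t.mean())  # statistics: maximum and mean
print(t > 0)              # comparison: element-wise greater-than
print(torch.dot(t, t))    # linear algebra: dot product
```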

 

1.3 "in place" operations

An "in place" operation is not one specific function; rather, most functions come with an "in place" variant of themselves.

xxx_(): performs the operation and modifies the tensor's own value directly.

For example:

torch.cos() => torch.cos_()

torch.floor() => torch.floor_()
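A minimal sketch of the difference between the two variants:

```python
import torch

t = torch.tensor([1.7, 2.3])

out = torch.floor(t)   # out-of-place: returns a new tensor, t is unchanged
print(t)               # tensor([1.7000, 2.3000])
print(out)             # tensor([1., 2.])

t.floor_()             # in-place: modifies t directly (and also returns it)
print(t)               # tensor([1., 2.])
```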

 

1.4 Tensor broadcasting: operations on tensors of different shapes
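Broadcasting follows the NumPy-style rules: when shapes differ, the smaller tensor is virtually expanded along the missing dimensions. A minimal sketch:

```python
import torch

a = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])   # shape (2, 3)
b = torch.tensor([10., 20., 30.])  # shape (3,)

# b is broadcast across the rows of a: it is added to each row
c = a + b
print(c)  # tensor([[11., 22., 33.], [14., 25., 36.]])
```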

 

1.5 Environment preparation

import numpy as np
import torch
 
print("Hello World")
print(torch.__version__)
print(torch.cuda.is_available())
  

1.6 Linear algebra operations on tensors

(1) Dot product: dot(a, b)

(2) Inner product: inner(a, b)

(3) Matrix product: matmul(a, b)

Notes:

Similarities and differences between the dot product and the inner product:

  • Same: the core operation is identical: multiply corresponding elements, then sum.
  • Different: dot() only supports the dot product of two one-dimensional tensors, while inner() also supports inner products of multidimensional tensors.

Dot product vs. matrix product:

  • Same: the dot product is the building block: corresponding elements are multiplied and summed.
  • Different: dot() only supports two one-dimensional tensors, while matmul() supports multidimensional tensors, where each row-column combination is a dot product.
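The dimensional restriction described above can be checked directly (a minimal sketch):

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[1., 1.], [1., 1.]])

# torch.dot only accepts 1-D tensors; on 2-D inputs it raises an error
try:
    torch.dot(a, b)
except RuntimeError as e:
    print("dot on 2-D tensors fails:", e)

# torch.inner accepts multidimensional tensors: it takes the inner
# product of every pair of row vectors along the last dimension
print(torch.inner(a, b))  # tensor([[3., 3.], [7., 7.]])
```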

 

Chapter 2: The dot product of vectors (the foundation): dot()

2.1 Definition

In a nutshell: the inner product of vectors (dot product / scalar product).

The dot product of two vectors multiplies their corresponding elements one by one and sums the results. For vectors a and b:

a·b = a1*b1 + a2*b2 + ... + an*bn

Note:

  • This requires that the one-dimensional vectors a and b have the same number of elements.
  • The result of the dot product is a scalar (a quantity, not a vector).

 

2.2 The geometric meaning of the vector inner product

(1) It can be used to compute the angle between two vectors:

   θ = arccos(a·b / (|a||b|))

(2) It equals the projection of vector b onto the direction of a, multiplied by |a|.

|a| = the square root of the sum of squared elements, i.e. the length of vector a.

|b| = the square root of the sum of squared elements, i.e. the length of vector b.

a·b = a1*b1 + a2*b2 + ... + an*bn

(3) Orthogonality indicator:

If the dot product is 0, the projection of a onto b is 0, meaning a and b are orthogonal.

If they are orthogonal, the two vectors are uncorrelated.
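The angle formula and the orthogonality test can be sketched in code (the vectors here are arbitrary examples):

```python
import math
import torch

a = torch.tensor([1., 0.])
b = torch.tensor([1., 1.])

# angle between a and b: theta = arccos(a.b / (|a||b|))
cos_theta = torch.dot(a, b) / (torch.norm(a) * torch.norm(b))
theta = torch.acos(cos_theta)
print(math.degrees(theta.item()))  # ~45.0

# orthogonality: the dot product of perpendicular vectors is 0
print(torch.dot(torch.tensor([1., 0.]), torch.tensor([0., 1.])))  # tensor(0.)
```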

 

2.3 Code example

# Dot product of vectors: dot()
a = torch.Tensor([1,2,3])
b = torch.Tensor([1,1,1])
print(a)
print(b)

print(torch.dot(a,b)) # Equivalent to 1*1 + 2*1 + 3*1
  
 Output :

tensor([1., 2., 3.])
tensor([1., 1., 1.])
tensor(6.)
  

 

# Dot product of vectors: vdot() (same as dot() for real tensors;
# for complex tensors, vdot() conjugates the first argument)
a = torch.Tensor([1,2,3])
b = torch.Tensor([1,1,1])
print(a)
print(b)

print(torch.vdot(a,b)) # Equivalent to 1*1 + 2*1 + 3*1
  
 Output :

tensor([1., 2., 3.])
tensor([1., 1., 1.])
tensor(6.)
  

 

Chapter 3: The cross product of vectors

3.1 Definition

The outer product of two vectors, also called the cross product or vector product, yields a vector rather than a scalar.

The cross product of two vectors is perpendicular to the plane spanned by them.

Definition: the cross product a×b of vectors a and b is a vector whose length is |a×b| = |a||b|sin∠(a,b) and whose direction is orthogonal to both a and b, such that (a, b, a×b) forms a right-handed system.
In particular, 0×a = a×0 = 0. Moreover, for any vector a, a×a = 0.

For three-dimensional vectors a and b, the cross product formula is (the result is a vector of the same dimension):

a×b = (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)

3.2 Geometric meaning

In 3D geometry, the cross product of vectors a and b is a vector, commonly called a normal vector, that is perpendicular to the plane formed by a and b.

In 3D graphics the cross product is very useful: from two vectors you can generate a third vector perpendicular to both, i.e. a normal, and use it to build an X, Y, Z coordinate system.

3.3 Code example

# Element-wise product of vectors (Hadamard product)
# Note: torch.multiply is element-wise multiplication, NOT the cross product
a = torch.Tensor([1,2,3])
b = torch.Tensor([1,1,1])
print(a)
print(b)

print(torch.multiply(a,b)) # Equivalent to [1*1, 2*1, 3*1]
  
 Output :

tensor([1., 2., 3.])
tensor([1., 1., 1.])
tensor([1., 2., 3.])
  
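Note that the code above uses torch.multiply, which is element-wise multiplication, not the cross product defined in section 3.1. The cross product itself is computed by torch.linalg.cross (available in recent PyTorch versions; older versions provide torch.cross):

```python
import torch

a = torch.tensor([1., 2., 3.])
b = torch.tensor([1., 1., 1.])

# cross product: (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)
c = torch.linalg.cross(a, b)
print(c)  # tensor([-1.,  2., -1.])

# the result is perpendicular to both inputs: both dot products are 0
print(torch.dot(c, a))  # tensor(0.)
print(torch.dot(c, b))  # tensor(0.)
```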

 

Chapter 4: The inner product of matrices (pairwise rows): inner()

4.1 Definition of the matrix inner product

For two tensors a and b whose last dimensions match, torch.inner computes the inner product of every pair of row vectors along the last dimension (not just rows at the same position).

(1) The inner product of vectors

(2) The inner product of matrices of vectors:

 

4.2 Code example

# Inner product of matrices
a = torch.Tensor([1,2,3])
b = torch.Tensor([0,1,0])
print(a)
print(b)

print(torch.inner(a,b)) # Equivalent to 1*0 + 2*1 + 3*0
print("")

a = torch.Tensor([[0,1,0], [0,2,0]])
b = torch.Tensor([[0,3,0], [0,4,0]])
print(a)
print(b)
print(torch.inner(a,b)) # result[i][j] = inner product of row i of a and row j of b
  
 Output :

tensor([1., 2., 3.])
tensor([0., 1., 0.])
tensor(2.)

tensor([[0., 1., 0.],
        [0., 2., 0.]])
tensor([[0., 3., 0.],
        [0., 4., 0.]])
tensor([[3., 4.],
        [6., 8.]])
  
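For 2-D inputs, the pairwise-rows behavior shown above is equivalent to multiplying a by the transpose of b; a minimal check:

```python
import torch

a = torch.tensor([[0., 1., 0.], [0., 2., 0.]])
b = torch.tensor([[0., 3., 0.], [0., 4., 0.]])

# inner(a, b)[i, j] == dot(a[i], b[j]), i.e. the same as a @ b.T
print(torch.inner(a, b))
print(a @ b.T)
print(torch.equal(torch.inner(a, b), a @ b.T))  # True
```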

 

Chapter 5: The matrix product: matmul()

5.1 Definition of the matrix product (matrix multiplication)

The most important form of matrix multiplication is the general matrix product. It is defined only when the number of columns of the first matrix equals the number of rows of the second matrix.

(1) The product of vectors

(2) The product of matrices

5.2 Code example

# Matrix product
a = torch.Tensor([1,2,3])  # treated as a 1-D vector
b = torch.Tensor([0,1,0])  # for two 1-D tensors, matmul is the dot product
print(a)
print(b)
print(torch.matmul(a,b)) # Equivalent to 1*0 + 2*1 + 3*0

print("")

a = torch.Tensor([[1,2,3], [4,5,6]])
b = torch.Tensor([[0,1], [1,1], [1,1]])

print(a)
print(b)
print(torch.matmul(a,b)) # (X x N) @ (N x Y) => (X x Y)

  
 Output :

tensor([1., 2., 3.])
tensor([0., 1., 0.])
tensor(2.)

tensor([[1., 2., 3.],
        [4., 5., 6.]])
tensor([[0., 1.],
        [1., 1.],
        [1., 1.]])
tensor([[ 5.,  6.],
        [11., 15.]])

  
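matmul also generalizes beyond 2-D: with batched inputs it performs one matrix product per batch element (a minimal sketch):

```python
import torch

# a batch of two (2 x 3) matrices times a batch of two (3 x 2) matrices
a = torch.ones(2, 2, 3)
b = torch.ones(2, 3, 2)

c = torch.matmul(a, b)   # batched matrix product
print(c.shape)           # torch.Size([2, 2, 2])
print(c[0])              # each entry is the dot product of two length-3 rows of ones: 3
```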

 

 

Copyright notice
This article was created by [Silicon based workshop of slow fire rock sugar]. Please include a link to the original when reposting. Thank you.
https://chowdera.com/2021/08/20210808153459103h.html
