[PyTorch Series - 11]: PyTorch Basics - Sorting of Tensor Elements
2021-08-08 15:42:21 【Silicon based workshop of slow fire rock sugar】
Chapter 1: Overview of Tensor Operations
1.1 Summary
PyTorch provides a large number of tensor operations that essentially match NumPy's operations on multidimensional arrays, supporting a variety of complex operations on tensors.
Most of these operations apply the same function to every element of the array and collect the per-element results into a new array of the same dimensions.
https://www.runoob.com/numpy/numpy-linear-algebra.html
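The element-wise behavior described above can be sketched as follows (a minimal example; the specific functions and values are only illustrative):

```python
import torch

# Element-wise function operations: each element is mapped independently,
# and the result has the same shape as the input.
x = torch.tensor([0.0, 1.0, 4.0])
print(torch.sqrt(x))       # tensor([0., 1., 2.])
print(torch.sin(x).shape)  # same shape as x: torch.Size([3])
```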
1.2 Classification of operations
(1) Arithmetic operations: addition, subtraction, scalar multiplication, scalar division
(2) Function operations: sin, cos, etc.
(3) Rounding operations: round up, round down
(4) Statistical operations: maximum, minimum, mean
(5) Comparison operations: greater than, equal to, less than, sorting
(6) Linear algebra operations: matrix operations, dot product, cross product
1.3 "In-place" operations
An "in-place" operation is not a single specific function; rather, almost every function has an "in-place" variant.
xxx_(): performs the operation and directly modifies the tensor's own value.
Essentially, each function has an in-place version, named with a trailing underscore and invoked as a method on the tensor, e.g.:
tensor.cos() => tensor.cos_()
tensor.floor() => tensor.floor_()
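A minimal sketch of the difference between the two variants (the values here are only illustrative):

```python
import torch

x = torch.tensor([1.9, -0.5])
y = x.floor()      # out-of-place: returns a new tensor, x is unchanged
print(x)           # tensor([ 1.9000, -0.5000])
x.floor_()         # in-place: modifies x itself
print(x)           # tensor([ 1., -1.])
```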
1.4 Tensor broadcasting: operations on tensors of different dimensions
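A minimal sketch of broadcasting (the shapes and values are only illustrative): when a (2, 3) matrix is combined with a (3,) vector, the vector is virtually expanded along dim 0 to match.

```python
import torch

m = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])
v = torch.tensor([10., 20., 30.])
# v is broadcast across both rows of m
print(m + v)
# tensor([[11., 22., 33.],
#         [14., 25., 36.]])
```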
1.5 Environment preparation
import numpy as np
import torch
print("Hello World")
print(torch.__version__)
print(torch.cuda.is_available())
1.6 Overview of sorting operations
(1) sort: sorts a tensor.
- Can sort along different tensor dimensions, specified with dim.
- Can sort in ascending or descending order, specified with descending.
- out: tensor in which to store the result.
(2) topk: gets the largest (or smallest) K elements.
- k: number of elements to get
- dim: the dimension along which to operate, rather than over all elements
- largest: whether to take the largest or the smallest elements
- sorted: whether to return the K elements in sorted order
- out: tensor in which to store the result.
(3) kthvalue: gets the k-th smallest element.
- k: the rank (counting from the smallest) of the element to get
- dim: the tensor dimension along which to operate
- out: tensor in which to store the result.
Chapter 2: Code Example: sort
# Sorting with sort
a = torch.Tensor([[5, 4, 1], [6, 3, 2]])
print("Source data:")
print(a)
print('\n')
print("Default sort")
print(torch.sort(a))
print("\ndim=1 sort")
print(torch.sort(a, dim=1))
print("\ndim=0 sort")
print(torch.sort(a, dim=0))
print("\nMember function sort")
print(a.sort(dim=1, descending=True))
print("\nSource data:")
print(a)
# Output:
Source data:
tensor([[5., 4., 1.],
        [6., 3., 2.]])
Default sort
torch.return_types.sort(
values=tensor([[1., 4., 5.],
        [2., 3., 6.]]),
indices=tensor([[2, 1, 0],
        [2, 1, 0]]))
dim=1 sort
torch.return_types.sort(
values=tensor([[1., 4., 5.],
        [2., 3., 6.]]),
indices=tensor([[2, 1, 0],
        [2, 1, 0]]))
dim=0 sort
torch.return_types.sort(
values=tensor([[5., 3., 1.],
        [6., 4., 2.]]),
indices=tensor([[0, 1, 0],
        [1, 0, 1]]))
Member function sort
torch.return_types.sort(
values=tensor([[5., 4., 1.],
        [6., 3., 2.]]),
indices=tensor([[0, 1, 2],
        [0, 1, 2]]))
Source data:
tensor([[5., 4., 1.],
        [6., 3., 2.]])
Remarks:
- The default sort direction is dim = 1 (the last dimension).
- The default order is ascending.
- Sorting does not modify the source tensor's data.
- You can sort either via torch.sort or via the tensor's member function a.sort.
- dim = 1: sorts within each row vector.
- dim = 0: sorts within each column vector.
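The relation between the returned values and indices can be checked with torch.gather (a minimal sketch, reusing the source data from the example above):

```python
import torch

a = torch.tensor([[5., 4., 1.],
                  [6., 3., 2.]])
values, indices = torch.sort(a, dim=1)
# Gathering the original elements at the returned indices
# reproduces the sorted values exactly.
print(torch.equal(torch.gather(a, 1, indices), values))  # True
```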
Chapter 3: Code Example: topk (get the top K values)
# topk
a = torch.Tensor([[2, 4, 6, 12, 8, 10], [1, 1, 5, 11, 11, 9]])
print("Source data:")
print(a)
print("\nDefault parameters:")
print(torch.topk(a, 2))
print("\nTake the K largest values:")
b = torch.topk(a, k=2, largest=True)
print(b)
print("\nTake the K smallest values:")
torch.topk(a, k=2, largest=False, out=b)
print(b)
Output:
Source data:
tensor([[ 2.,  4.,  6., 12.,  8., 10.],
        [ 1.,  1.,  5., 11., 11.,  9.]])
Default parameters:
torch.return_types.topk(
values=tensor([[12., 10.],
        [11., 11.]]),
indices=tensor([[3, 5],
        [3, 4]]))
Take the K largest values:
torch.return_types.topk(
values=tensor([[12., 10.],
        [11., 11.]]),
indices=tensor([[3, 5],
        [3, 4]]))
Take the K smallest values:
torch.return_types.topk(
values=tensor([[2., 4.],
        [1., 1.]]),
indices=tensor([[0, 1],
        [0, 1]]))
Remarks:
- Elements with equal values still participate in the ranking; duplicates are not excluded.
- The default is the k largest values (largest=True).
- The operation does not modify the source tensor's data.
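Like sort, topk also accepts a dim argument; a minimal sketch along dim=0 (the data here is only illustrative) takes the two largest values in each column:

```python
import torch

a = torch.tensor([[2., 4., 6.],
                  [1., 9., 5.],
                  [8., 3., 7.]])
# Top-2 along dim=0: the two largest values in each column,
# ranked from largest to smallest.
values, indices = torch.topk(a, k=2, dim=0)
print(values)
# tensor([[8., 9., 7.],
#         [2., 4., 6.]])
```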
Chapter 4: Code Example: kthvalue (get the k-th smallest value)
# kthvalue: get the k-th smallest element
a = torch.Tensor([[2, 4, 6, 12, 8, 0], [3, 7, 5, 11, 1, 9]])
print("Source data:")
print(a)
print("\nSorted values of the source data:")
print(torch.sort(a))
print("\nDefault parameters, the 3rd smallest value:")
print(torch.kthvalue(a, 3))
print("\nThe 3rd smallest value along dim=1:")
b = torch.kthvalue(a, k=3, dim=1)
print(b)
print("\nThe 1st smallest value along dim=0:")
torch.kthvalue(a, k=1, dim=0, out=b)
print(b)
Output:
Source data:
tensor([[ 2.,  4.,  6., 12.,  8.,  0.],
        [ 3.,  7.,  5., 11.,  1.,  9.]])
Sorted values of the source data:
torch.return_types.sort(
values=tensor([[ 0.,  2.,  4.,  6.,  8., 12.],
        [ 1.,  3.,  5.,  7.,  9., 11.]]),
indices=tensor([[5, 0, 1, 2, 4, 3],
        [4, 0, 2, 1, 5, 3]]))
Default parameters, the 3rd smallest value:
torch.return_types.kthvalue(
values=tensor([4., 5.]),
indices=tensor([1, 2]))
The 3rd smallest value along dim=1:
torch.return_types.kthvalue(
values=tensor([4., 5.]),
indices=tensor([1, 2]))
The 1st smallest value along dim=0:
torch.return_types.kthvalue(
values=tensor([ 2.,  4.,  5., 11.,  1.,  0.]),
indices=tensor([0, 0, 1, 1, 1, 0]))
Remarks:
- The default direction is dim = 1 (the last dimension), i.e. the comparison runs within each row vector.
- Within the sequence along the specified dim, kthvalue returns the k-th smallest value, i.e. the k-th element after sorting in ascending order.
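This ascending-order semantics can be verified directly against sort (a minimal sketch, reusing the first row of the example data):

```python
import torch

a = torch.tensor([[2., 4., 6., 12., 8., 0.]])
# kthvalue(k) along dim=1 equals column k-1 of the ascending sort.
kth = torch.kthvalue(a, k=3, dim=1).values
srt = torch.sort(a, dim=1).values[:, 2]
print(kth, srt)  # tensor([4.]) tensor([4.])
```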
Copyright notice
This article was created by [Silicon based workshop of slow fire rock sugar]. Please include the original link when reposting. Thanks.
https://chowdera.com/2021/08/20210808153459099S.html