TensorFlow: basic usage of tf.tile, tf.slice, and related functions
2020-11-06 01:27:46 【Elementary school students in IT field】
tf.tile
Overview:
The tile() function in TensorFlow expands a tensor by replicating its data in a regular pattern; the number of dimensions of the output tensor stays the same as the input.
Function definition:
tf.tile(
input,
multiples,
name=None)
input is the tensor to be expanded, and multiples specifies how to expand it.
If input is a 3-D tensor, then multiples must be a 1-D tensor with 3 elements (shape 1x3). Its three values specify, in order, how many times to replicate the data along the 1st, 2nd, and 3rd dimensions of input.
For example:
import tensorflow as tf

a = tf.constant([[1, 2], [3, 4], [5, 6]], dtype=tf.float32)
a1 = tf.tile(a, [2, 3])
with tf.Session() as sess:
    print(sess.run(a))
    print(sess.run(a1))
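For reference, the expected value of a1: the rows are replicated twice and the columns three times, so the shape goes from (3, 2) to (6, 6):
[[1. 2. 1. 2. 1. 2.]
 [3. 4. 3. 4. 3. 4.]
 [5. 6. 5. 6. 5. 6.]
 [1. 2. 1. 2. 1. 2.]
 [3. 4. 3. 4. 3. 4.]
 [5. 6. 5. 6. 5. 6.]]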
The operation of tf.tile() proceeds as follows (the original post illustrates it with a drawing):
Note: after the first expansion, the first dimension grows from three rows of data to six; the extra lines in the drawing do not mean an extra dimension, the data is still laid out in order and is only wrapped for drawing.
Expansion along each dimension copies the existing data and appends it directly after the original.
If an entry of multiples is 1, the data along that dimension is left unchanged.
Example:
import tensorflow as tf

def test12():
    a = tf.constant([[1, 2], [3, 4], [5, 6]], dtype=tf.float32)
    print(a)
    doc_y = tf.tile(a, [1, 1])
    print(doc_y)
    with tf.Session() as sess:
        print(sess.run(a))
        print(sess.run(doc_y))
[[ 1. 2.]
[ 3. 4.]
[ 5. 6.]]
[[ 1. 2.]
[ 3. 4.]
[ 5. 6.]]
tf.slice
tf.slice(input_, begin, size, name = None)
Explanation:
This function extracts a slice from the input tensor input_.
The size of the slice is size, and its starting position is begin.
size indicates the shape of the output tensor, where size[i] is the number of elements taken along the i-th dimension.
The starting position begin gives the offset of the slice in each dimension of input_. For example, if the input is
[[[1, 1, 1], [2, 2, 2]],
 [[33, 3, 3], [4, 4, 4]],
 [[5, 5, 5], [6, 6, 6]]]
and begin is [1, 0, 0], then the slice starts at 33: the first dimension is offset by 1 and the other dimensions are not offset, so the starting element is 33.
If size[i] is -1, all remaining elements of dimension i are included in the slice, which is equivalent to setting:
size[i] = input.dim_size(i) - begin[i]
The operation must satisfy:
0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n)
Here's an explanation tf.slice() Function and usage of ;
silce_1, The starting point of the slice is [0,0,0], The size of the slice is [1,1,3]; So I'm going to cut one from the origin [1,1,3] The data of , It's a batch of (1,3)
slice_2, The starting point of the slice is [1,0,0], The size of the slice is [1,2,3]; It means slicing from the data of the second batch , Cut off a batch of (2,3) The data of
slice_3, The starting point of the slice is still [1,0,0], The size of the slice is [2,1,3]; From the second batch , Cut two batches of (1,3) The data of
Example :
import tensorflow as tf
sess = tf.Session()
input = tf.constant([[[1, 1, 1], [2, 2, 2]],
                     [[3, 3, 3], [4, 4, 4]],
                     [[5, 5, 5], [6, 6, 6]]])
data = tf.slice(input, [1, 0, 0], [1, 1, 3])
print(sess.run(data))
"""[1,0,0] It means that the first dimension is shifted 1
It is from [[[3, 3, 3], [4, 4, 4]],[[5, 5, 5], [6, 6, 6]]] Select data
Then choose the first one of the first dimensions , The first data in the second dimension , Three data of the third dimension """
# [[[3 3 3]]]
data = tf.slice(input, [1, 0, 0], [1, 2, 3])
print(sess.run(data))
# [[[3 3 3]
# [4 4 4]]]
data = tf.slice(input, [1, 0, 0], [2, 1, 3])
print(sess.run(data))
# [[[3 3 3]]
#
# [[5 5 5]]]
data = tf.slice(input, [1, 0, 0], [2, 2, 2])
print(sess.run(data))
# [[[3 3]
# [4 4]]
#
# [[5 5]
# [6 6]]]
""" Input parameters :
● input_: One Tensor.
● begin: One Tensor, The data type is int32 perhaps int64.
● size: One Tensor, The data type is int32 perhaps int64.
● name:( Optional ) Give this operation a name .
Output parameters :
● One Tensor, Data types and input_ identical ."""
Example 2: slicing a column from a probability matrix
import tensorflow as tf

prob = tf.constant([[1, 2], [3, 4], [5, 6]], dtype=tf.float32)
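# in size, -1 means take all remaining elements of that dimension (here: every row)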
hit_prob = tf.slice(prob, [0, 0], [-1, 1])
sess = tf.Session()
print(sess.run(hit_prob))
[[ 1.]
[ 3.]
[ 5.]]
tf.square()
tf.math.square(
x,
name=None
)
Purpose: computes the square of x element-wise.
Args:
x: A Tensor or SparseTensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
name: A name for the operation (optional).
import tensorflow as tf

prob = tf.constant([[1, 2], [3, 4], [5, 6]], dtype=tf.float32)
sess = tf.Session()
print(sess.run(tf.square(prob)))
[[ 1. 4.]
[ 9. 16.]
[ 25. 36.]]
tf.reduce_sum
reduce_sum(input_tensor, axis=None, keepdims=False, name=None)
For example:
x = tf.constant([[1, 1, 1], [1, 1, 1]])
tf.reduce_sum(x) # 6
tf.reduce_sum(x, 0) # [2, 2, 2]
tf.reduce_sum(x, 1) # [3, 3]
tf.reduce_sum(x, 1, keepdims=True) # [[3], [3]]
tf.reduce_sum(x, [0, 1]) # 6
Args:
input_tensor: The tensor to reduce. Should have numeric type.
axis: The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
keepdims: If true, retains reduced dimensions with length 1.
name: A name for the operation (optional).
Returns:
The reduced tensor, of the same dtype as the input_tensor.
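The doc-style snippets above are not runnable on their own; here is a minimal runnable sketch in the same TF1 session style used throughout this post:
import tensorflow as tf

x = tf.constant([[1, 1, 1], [1, 1, 1]])
with tf.Session() as sess:
    print(sess.run(tf.reduce_sum(x)))                    # 6
    print(sess.run(tf.reduce_sum(x, 0)))                 # [2 2 2]
    print(sess.run(tf.reduce_sum(x, 1)))                 # [3 3]
    print(sess.run(tf.reduce_sum(x, 1, keepdims=True)))  # [[3] [3]], shape (2, 1)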
tf.multiply()
Multiplies the corresponding elements of two tensors together (element-wise).
Format: tf.multiply(x, y, name=None)
Parameters:
x: a Tensor of type half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, or complex128.
y: a Tensor with the same type as x.
Return value: x * y, element-wise.
Notes:
(1) multiply performs element-level multiplication, i.e. corresponding elements are multiplied together; this is not matrix multiplication. Note the difference from tf.matmul.
(2) The two operands must have the same data type, otherwise an error is raised.
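A minimal sketch contrasting the two (TF1 session style, with values made up for illustration):
import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.]])
y = tf.constant([[10., 20.], [30., 40.]])
z = tf.multiply(x, y)  # element-wise product, not a matrix product
with tf.Session() as sess:
    print(sess.run(z))
# [[ 10.  40.]
#  [ 90. 160.]]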
tf.matmul()
Multiplies matrix a by matrix b, producing a * b.
Format : tf.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)
Parameters:
a: a Tensor of type float16, float32, float64, int32, complex64, or complex128, with rank > 1.
b: a Tensor with the same type as a.
transpose_a: if True, a is transposed before multiplication.
transpose_b: if True, b is transposed before multiplication.
adjoint_a: if True, a is conjugated and transposed before multiplication.
adjoint_b: if True, b is conjugated and transposed before multiplication.
a_is_sparse: if True, a is treated as a sparse matrix.
b_is_sparse: if True, b is treated as a sparse matrix.
name: a name for the operation (optional).
Return value: a Tensor of the same type as a and b, in which each innermost matrix is the product of the corresponding matrices in a and b.
Notes:
(1) The inputs must be matrices (or tensors of rank > 2 representing batches of matrices), with matrix sizes that match after any transposition.
(2) Both matrices must be of the same type; the supported types are: float16, float32, float64, int32, complex64, complex128.
Raises:
ValueError: if both transpose_a and adjoint_a, or both transpose_b and adjoint_b, are set to True.
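For comparison with tf.multiply, a minimal sketch (same TF1 style, illustrative values):
import tensorflow as tf

a = tf.constant([[1., 2.], [3., 4.]])
b = tf.constant([[5., 6.], [7., 8.]])
c = tf.matmul(a, b)  # matrix product: c[i][j] = sum over k of a[i][k] * b[k][j]
with tf.Session() as sess:
    print(sess.run(c))
# [[19. 22.]
#  [43. 50.]]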
tf.truediv
tf.truediv(
x,
y,
name=None
)
Parameter description:
x: a Tensor of numeric type, used as the numerator.
y: a Tensor of numeric type, used as the denominator.
name: a name for the operation (optional).
Return value: x / y, evaluated element-wise with floating-point division (integer inputs are cast to floating point first).
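A minimal sketch (TF1 session style as above); note that the integer inputs are cast to floating point before dividing:
import tensorflow as tf

x = tf.constant([1, 4, 7])
y = tf.constant([2, 2, 2])
z = tf.truediv(x, y)  # int inputs are cast to float, so the result keeps fractions
with tf.Session() as sess:
    print(sess.run(z))
# [0.5 2.  3.5]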

Copyright notice
This article was created by [Elementary school students in IT field]. Please include a link to the original when reposting. Thank you.