
An Introduction to PyTorch and Tensors


https://github.com/pytorch/pytorch

PyTorch is a Python package that provides two high-level features: tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system.

[365datascience] Tensors have been around for nearly 200 years. In fact, the word 'tensor' was introduced by William Hamilton. Interestingly, Hamilton's meaning had little to do with what we have called tensors from 1898 to the present day.

How did tensors become important, you may ask? Well, not without the help of one of the biggest names in science, Albert Einstein! Einstein developed and formulated the whole theory of general relativity entirely in the language of tensors. Having done that, Einstein, while not a big fan of tensors himself, popularized tensor calculus more than anyone else ever could.

Nowadays, we can argue that the word 'tensor' is still a bit 'underground'. You won't hear it in high school. In fact, your math teacher may have never heard of it. However, state-of-the-art machine learning frameworks are doubling down on tensors. The most prominent example is Google's TensorFlow.

A scalar has the lowest dimensionality and is always 1x1. It can be thought of as a vector of length 1, or a 1x1 matrix. For example, `int i;` in C.

It is followed by a vector, where each element of that vector is a scalar. The dimensions of a vector are nothing but Mx1 or 1xM matrices. For example, `int ary[10];` is 1 row of 10 columns. An `int* ary[10]` can hold 10 rows of 1 column each, or 10 rows of M columns each, by treating each `int row[M]` as an `int*`.

Then we have matrices, which are nothing more than a collection of vectors. The dimensions of a matrix are MxN. In other words, an MxN matrix is a collection of n column vectors of dimensions m by 1, or m row vectors of dimensions 1 by n. Furthermore, since scalars make up vectors, you can also think of a matrix as a collection of scalars, too.

Now, a tensor is the most general concept. Scalars, vectors, and matrices are all tensors of ranks 0, 1, and 2, respectively. Tensors are simply a generalization of the concepts we have seen so far. For example, `float tensor[X][Y][Z][T]` in C would be a rank-4 tensor.
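The rank progression above can be sketched with NumPy (PyTorch's `torch.tensor` behaves analogously; the arrays below are illustrative examples, not from the original post):

```python
import numpy as np

scalar = np.array(3.0)                # rank 0: a single number, like int i;
vector = np.array([1.0, 2.0, 3.0])    # rank 1: like int ary[3]
matrix = np.array([[1, 2], [3, 4]])   # rank 2: an MxN grid of scalars
tensor4 = np.zeros((2, 3, 4, 5))      # rank 4: like float tensor[X][Y][Z][T]

# .ndim is the rank (number of axes); .shape is the size along each axis
print(scalar.ndim, vector.ndim, matrix.ndim, tensor4.ndim)  # 0 1 2 4
print(tensor4.shape)                                        # (2, 3, 4, 5)
```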

 

What is a tape-based autograd system? [stackoverflow] There are different types of automatic differentiation, e.g. forward-mode, reverse-mode, and hybrids. The tape-based autograd in PyTorch simply refers to its use of reverse-mode automatic differentiation. Reverse-mode auto diff is a technique for computing gradients efficiently, and it happens to be what backpropagation uses.
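To make "tape-based reverse-mode" concrete, here is a toy sketch in plain Python. It is emphatically not PyTorch's implementation (PyTorch records operations in C++), and the `Var` class is my own invention, but the bookkeeping idea is the same: the forward pass records each operation and its local gradients, then the backward pass replays those records in reverse, accumulating gradients.

```python
class Var:
    """A toy autodiff variable that records its parents on a 'tape'."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # list of (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Chain rule: push the incoming gradient to each parent.
        # (A real implementation would topologically sort the graph;
        # plain recursion is fine for this small example.)
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

x = Var(2.0)
y = Var(3.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```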

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.

More About PyTorch

CUDA is not supported on macOS.

Package dependencies: https://github.com/pytorch/pytorch#install-dependencies https://pypi.org

from sympy import *  # pip install sympy
x, y, z = symbols('x y z')
print(simplify(sin(x)**2 + cos(x)**2))
print(expand((x + 1)**3))
print(factor(x**3 + 1))
print(factor(x**33 + 1))
'''
Output:
1
x**3 + 3*x**2 + 3*x + 1
(x + 1)*(x**2 - x + 1)
(x + 1)*(x**2 - x + 1)*(x**10 - x**9 + x**8 - x**7 + x**6 - x**5 + x**4 - x**3 + x**2 - x + 1)*(x**20 + x**19 - x**17 - x**16 + x**14 + x**13 - x**11 - x**10 - x**9 + x**7 + x**6 - x**4 - x**3 + x + 1)
'''
https://blog.csdn.net/shuangguo121/article/details/86611948
collect combines like terms; cancel reduces a rational expression to lowest terms over a common denominator; apart performs partial-fraction decomposition, splitting one fraction into a sum or difference of simpler fractions; trigsimp simplifies expressions built from trigonometric functions; expand_trig expands trigonometric functions; powsimp combines powers whose exponents can be merged; expand_power_exp expands powers by exponent, and expand_power_base expands powers by base.
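A short sketch of these helpers; the input expressions below are my own illustrative examples, not from the linked post:

```python
from sympy import (symbols, collect, cancel, apart, trigsimp,
                   expand_trig, powsimp, sin, cos)

x, y = symbols('x y')

print(collect(x*y + x + 2*x**2 + x**2, x))  # group like terms by powers of x
print(cancel((x**2 - 1)/(x - 1)))           # reduce the fraction to x + 1
print(apart(1/(x*(x + 1))))                 # partial fractions: 1/x - 1/(x + 1)
print(trigsimp(sin(x)**2 + cos(x)**2))      # simplifies to 1
print(expand_trig(sin(2*x)))                # expands to 2*sin(x)*cos(x)
print(powsimp(x**y * x**2))                 # merges exponents: x**(y + 2)
```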

Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. It is like the difference between a blocking `while(recv) puts` loop and asynchronous I/O with an event loop, or AJAX calls nested inside AJAX callbacks in JavaScript. "Imperative" as in the imperative mood: you say "do it!" and it is done then and there, not at some unknown later time. (See also: concurrency, parallelism, processes, threads, coroutines, async I/O, Python async - Fun_with_Words - cnblogs.com)
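The contrast can be sketched in plain Python: in the eager (imperative) style each statement runs as soon as it is reached, while a graph-based style first records the computation and only executes it later. The two helper functions below are illustrative inventions, not PyTorch APIs:

```python
# Eager (imperative): each line executes immediately, so an error
# would surface on the exact line that caused it.
def eager():
    a = [1, 2, 3]
    b = [v * 2 for v in a]  # runs right now
    return b

# Deferred (graph-based): first build a description of the computation,
# then run it later - the style PyTorch's eager mode avoids.
def deferred():
    graph = []                                   # the recorded "program"
    graph.append(lambda a: [v * 2 for v in a])   # nothing executes yet
    def run(inputs):
        for op in graph:
            inputs = op(inputs)
        return inputs
    return run([1, 2, 3])                        # execution happens here

print(eager(), deferred())  # [2, 4, 6] [2, 4, 6]
```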

Tags: Python, introduction, vectors, tensors, PyTorch, autograd
Source: https://www.cnblogs.com/funwithwords/p/15706651.html