
TensorFlow Learning Notes 3: TensorFlow Programming Strategies


3.1 A First Look at Computation Graphs and Tensors
1. The computation process of a TensorFlow program can be represented as a computation graph, i.e. a directed graph.
3.2 The Computation Graph - TensorFlow's Computation Model
1. Using computation graphs in TensorFlow 1.x

import tensorflow as tf
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b
# both constants live in the default computation graph
print(a.graph is tf.get_default_graph())   # True
print(b.graph is tf.get_default_graph())   # True

2. TensorFlow maintains a default computation graph and adds every defined operation to it.
3. get_default_graph() returns a reference to the current default computation graph.
4. Using the Graph() constructor together with as_default():

import tensorflow as tf
g1 = tf.Graph()
with g1.as_default():
    a = tf.get_variable("a", [2], initializer=tf.ones_initializer())
    b = tf.get_variable("b", [2], initializer=tf.zeros_initializer())
g2 = tf.Graph()
with g2.as_default():
    a = tf.get_variable("a", [2], initializer=tf.zeros_initializer())
    b = tf.get_variable("b", [2], initializer=tf.ones_initializer())
with tf.Session(graph=g1) as sess:
    tf.global_variables_initializer().run()
    with tf.variable_scope("", reuse=True):
        print(sess.run(tf.get_variable("a")))   # [1. 1.]
        print(sess.run(tf.get_variable("b")))   # [0. 0.]
with tf.Session(graph=g2) as sess:
    tf.global_variables_initializer().run()
    with tf.variable_scope("", reuse=True):
        print(sess.run(tf.get_variable("a")))   # [0. 0.]
        print(sess.run(tf.get_variable("b")))   # [1. 1.]

5. The default collections maintained by TensorFlow include tf.GraphKeys.GLOBAL_VARIABLES (all variables), tf.GraphKeys.TRAINABLE_VARIABLES (variables to be optimized), tf.GraphKeys.SUMMARIES (log-related tensors), tf.GraphKeys.QUEUE_RUNNERS (queue runners), and tf.GraphKeys.MOVING_AVERAGE_VARIABLES (variables tracked with moving averages).

6. add_to_collection() adds an element to one or more collections, and get_collection() returns all elements of a given collection.
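
A minimal TensorFlow 1.x sketch of the collection API; the collection name "my_losses" is an arbitrary example:

import tensorflow as tf
v = tf.Variable(tf.zeros([2]), name="v")   # auto-added to GLOBAL_VARIABLES
                                           # and TRAINABLE_VARIABLES
loss = tf.reduce_sum(v)
tf.add_to_collection("my_losses", loss)    # "my_losses" is a custom collection
print(tf.get_collection("my_losses"))
print(tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES))
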
7. The AutoGraph feature
(1) Graph execution

import tensorflow as tf
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b
print(a)        # prints the Tensor object, not its value
print(b)
print(result)
with tf.Session() as sess:
    print(sess.run(result))   # [4. 6.]

The value of result is obtained only after executing it with run().
(2) Eager execution
Calling enable_eager_execution() runs the program in Eager Execution mode, where operations are evaluated immediately and no session is needed.

import tensorflow as tf
import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()   # must be called once, at program startup
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b
print(a)        # in eager mode the tensor values are printed directly
print(b)
print(result)

(3) AutoGraph
For a function decorated with convert(), AutoGraph automatically generates a computation graph for it.
A decorator wraps the decorated function inside another function (the meta-function), as the plain-Python sketch below illustrates.
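
A minimal plain-Python sketch of the decorator mechanism; the names log_calls and greet are made up for illustration:

def log_calls(func):
    # wrapper is the meta-function that encloses the decorated function
    def wrapper(*args, **kwargs):
        print("calling", func.__name__)
        return func(*args, **kwargs)
    return wrapper

@log_calls            # equivalent to: greet = log_calls(greet)
def greet(name):
    return "hello " + name

print(greet("tf"))    # prints "calling greet", then "hello tf"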

import tensorflow as tf
from tensorflow.contrib import autograph as ag

@ag.convert()
def init_g1_var():
    a = tf.get_variable("a", [2], initializer=tf.ones_initializer())
    b = tf.get_variable("b", [2], initializer=tf.zeros_initializer())

@ag.convert()
def init_g2_var():
    a = tf.get_variable("a", [2], initializer=tf.zeros_initializer())
    b = tf.get_variable("b", [2], initializer=tf.ones_initializer())

g1 = tf.Graph()
with g1.as_default():
    init_g1_var()
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        with tf.variable_scope("", reuse=True):
            print(sess.run(tf.get_variable("a")))
            print(sess.run(tf.get_variable("b")))
g2 = tf.Graph()
with g2.as_default():
    init_g2_var()
    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        with tf.variable_scope("", reuse=True):
            print(sess.run(tf.get_variable("a")))
            print(sess.run(tf.get_variable("b")))

(4) AutoGraph support in TensorFlow 2.0
TensorFlow 2.0 no longer ships the contrib module; decorating a function with tf.function enables AutoGraph directly, and eager execution is the default execution mode.

import tensorflow as tf
@tf.function
def simple_matmul(x, y):
    return tf.matmul(x, y)
x = tf.Variable(tf.random.uniform((4, 4)))
y = tf.Variable(tf.random.uniform((4, 4)))
print(simple_matmul(x, y))

import tensorflow as tf
def simple_mat_mul(x, y):
    return tf.matmul(x, y)
def simple_mat_add(x, y):
    return tf.add(x, y)
@tf.function
def simple_mat_opt(x, y):
    # reduce the element-wise comparison to a single boolean so AutoGraph
    # can convert the branch into a graph conditional
    if tf.reduce_all(tf.equal(x, y)):
        print("x == y")
        return simple_mat_add(x, y)
    else:
        print("x != y")
        return simple_mat_mul(x, y)
x = tf.Variable(tf.random.uniform((4, 4)))
y = tf.Variable(tf.random.uniform((4, 4)))
print(simple_mat_opt(x, y))
# print the Python source AutoGraph generated for the function
print(tf.autograph.to_code(simple_mat_opt.python_function))

3.3 Tensors - TensorFlow's Data Model
1. A tensor is an array of arbitrary dimension: a rank-0 tensor is a scalar, a rank-1 tensor is a vector, and an n-dimensional array is a rank-n tensor.
2. A tensor stores the attributes of a computation result rather than the numbers themselves:
op: the operation that produces the tensor, named node:src_output
shape: the size of the tensor
dtype: the data type
3. The data types supported by TensorFlow; common ones appear in the sketch below.
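
A minimal graph-mode sketch that inspects the three attributes above; the tensor name in the comments assumes a fresh default graph:

import tensorflow as tf
a = tf.constant([1.0, 2.0], name="a", dtype=tf.float32)
result = tf.add(a, a, name="add")
print(result.name)    # add:0 -- node name plus src_output index
print(result.shape)   # (2,)
print(result.dtype)   # <dtype: 'float32'>
# common dtypes: tf.float32, tf.float64, tf.int32, tf.int64,
# tf.uint8, tf.bool, tf.string
b = tf.cast(a, tf.int32)   # converting between dtypes
print(b.dtype)             # <dtype: 'int32'>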

3.4 Sessions - TensorFlow's Execution Model
3. Using sessions (single-machine mode) in TensorFlow 1.x
(1) The typical session usage pattern

# Session constructor prototype: __init__(self, target, graph, config)
sess = tf.Session()
sess.run(result)   # result = a + b, as defined earlier
# run prototype: run(self, fetches, feed_dict, options, run_metadata)
sess.close()       # must be called explicitly, or resources are leaked

(2) The with/as session usage pattern

with expression [as variable]:
    with-block

The result of expression must be an object that supports the context-management protocol.

with tf.Session() as sess:
    sess.run(...)

• Evaluating expression returns an object; that object serves as the context manager and must have __enter__() and __exit__() methods.
• The context manager's __enter__() method is called; if an as clause is present, its return value is bound to the variable in the as clause, otherwise the return value is discarded.
• The code inside the with block is executed.
• If the with block raises an exception, __exit__(type, value, traceback) is called with the exception details.
• If the with block raises no exception, __exit__() is still called, with type, value, and traceback all None.
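
A minimal plain-Python sketch of this protocol; the Managed class is made up for illustration:

class Managed:
    def __enter__(self):
        print("enter")
        return self              # bound to the variable in the as clause
    def __exit__(self, exc_type, exc_value, traceback):
        print("exit", exc_type)  # all three arguments are None if no exception occurred
        return False             # do not suppress exceptions

with Managed() as m:
    print("inside the with block")
# prints: enter / inside the with block / exit None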

(3) The InteractiveSession class

# an ordinary Session must be installed as the default explicitly:
sess = tf.Session()
with sess.as_default():
    ...   # with-block
# InteractiveSession registers itself as the default session on construction:
sess = tf.InteractiveSession()
# InteractiveSession constructor prototype: __init__(self, target, graph, config)

(4) Tensor.eval() computes the value of a tensor

sess = tf.Session()
with sess.as_default():
    print(result.eval())   # works because sess is now the default session
    # equivalent forms that do not rely on a default session:
    # print(sess.run(result))
    # print(result.eval(session=sess))

4. Configuring session parameters
A Session is configured through a ConfigProto protocol buffer.

# allow_soft_placement falls back to the CPU when an op cannot run on the
# GPU; log_device_placement logs the device each op is placed on
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True)
sess1 = tf.Session(config=config)
sess2 = tf.InteractiveSession(config=config)

5. The placeholder mechanism: providing input data dynamically while the session runs.

A placeholder reserves a position in the graph; the data at that position is supplied only when the program runs.

import tensorflow as tf
# placeholder prototype: placeholder(dtype, shape, name)
a = tf.placeholder(tf.float32, shape=(2,), name="input")
b = tf.placeholder(tf.float32, shape=(2,), name="input")
result = a + b
with tf.Session() as sess:
    # the placeholders must be fed through feed_dict
    print(sess.run(result, feed_dict={a: [1.0, 2.0], b: [3.0, 4.0]}))

import tensorflow as tf
# shapes (2,) and (4, 2) are combined by broadcasting
a = tf.placeholder(tf.float32, shape=(2,), name="input")
b = tf.placeholder(tf.float32, shape=(4, 2), name="input")
result = a + b
with tf.Session() as sess:
    print(sess.run(result, feed_dict={a: [1.0, 2.0],
                                      b: [[3.0, 4.0], [5.0, 6.0], [7.0, 8.0], [9.0, 10.0]]}))

6. TensorFlow 2.0 removes sessions

# Session MODE
import tensorflow as tf
x1 = tf.placeholder(dtype=tf.float32, shape=(2,))
x2 = tf.placeholder(dtype=tf.float32, shape=(2,))
def forward(x):
    with tf.variable_scope("matmul", reuse=tf.AUTO_REUSE):
        W = tf.get_variable("W", initializer=tf.ones(shape=(2, 2)),
                            regularizer=tf.contrib.layers.l2_regularizer(0.04))
        b = tf.get_variable("b", initializer=tf.zeros(shape=(2,)))
        return W * x + b
out_1 = forward(x1)
out_2 = forward(x2)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run([out_1, out_2], feed_dict={x1: [1, 2], x2: [3, 4]}))
# AutoGraph MODE
import tensorflow as tf
W = tf.Variable(tf.ones(shape=(2, 2)), name="W")
b = tf.Variable(tf.zeros(shape=(2,)), name="b")
@tf.function
def forward(x):
    return W * x + b
# feed float inputs so the dtype matches the float32 variables
out_1 = forward([1.0, 2.0])
out_2 = forward([3.0, 4.0])
print(out_1)
print(out_2)

7. A small model

# Session MODE
import tensorflow as tf
def simple_model(x, training, scope='model'):
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        # x is the input; the first layer is a convolutional layer -- conv2d
        x = tf.layers.conv2d(x, 64, 3, activation=tf.nn.relu,
                             kernel_regularizer=tf.contrib.layers.l2_regularizer(0.04))
        # the second layer is a max-pooling layer -- max_pooling2d
        x = tf.layers.max_pooling2d(x, (2, 2), 1)
        # flatten() reshapes the feature map into a vector
        x = tf.layers.flatten(x)
        # dropout() randomly drops some neurons to reduce overfitting
        x = tf.layers.dropout(x, 0.1, training=training)
        # dense() is a fully connected layer
        x = tf.layers.dense(x, 64, activation=tf.nn.relu)
        # batch_normalization() normalizes activations across the batch
        x = tf.layers.batch_normalization(x, training=training)
        x = tf.layers.dense(x, 10, activation=tf.nn.softmax)
        return x
# train_data and test_data are assumed to be defined elsewhere
train_out = simple_model(train_data, training=True)
test_out = simple_model(test_data, training=False)
# AutoGraph MODE: tensorflow.keras
import tensorflow as tf
simple_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.04),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2), 1),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10, activation='softmax')
])

The same model can also be built incrementally with add():

model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, 3, activation='relu',
                                 kernel_regularizer=tf.keras.regularizers.l2(0.04),
                                 input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2), 1))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dense(10, activation='softmax'))

3.5 TensorFlow Variables
1. Creating variables
(1) Random numbers

weights = tf.Variable(tf.random.normal([3, 4], stddev=1))


# Variable constructor prototype:
def __init__(self, initial_value=None, trainable=None,
             validate_shape=True, caching_device=None, name=None,
             variable_def=None, dtype=None, import_scope=None,
             constraint=None,
             synchronization=tf.VariableSynchronization.AUTO,
             aggregation=tf.compat.v1.VariableAggregation.NONE,
             shape=None)

The random-number generation functions, their distributions, and main parameters:

normal(shape,mean=0.0,stddev=1.0,dtype=tf.dtypes.float32,seed=None,name=None)
poisson(shape,lam,dtype=tf.dtypes.float32,seed=None,name=None)
uniform(shape,minval=0,maxval=None,dtype=tf.dtypes.float32,seed=None,name=None)
gamma(shape,alpha,beta=None,dtype=tf.dtypes.float32,seed=None,name=None)
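
A brief sketch of using these generators to initialize variables:

import tensorflow as tf
w1 = tf.Variable(tf.random.normal([2, 3], mean=0.0, stddev=1.0, seed=1))
w2 = tf.Variable(tf.random.uniform([2, 3], minval=0, maxval=1, seed=1))
print(w1.shape, w1.dtype)   # (2, 3) <dtype: 'float32'>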

(2) Constants
Constant-generation functions include tf.zeros(), tf.ones(), tf.fill(), and tf.constant() (sketched after the example below).

biases = tf.Variable(tf.zeros([3]))
# initialized_value() lets one variable's initial value depend on another's
b1 = tf.Variable(biases.initialized_value())
b2 = tf.Variable(biases.initialized_value() * 3.0)
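
A short TF 1.x sketch of the constant generators named above:

import tensorflow as tf
zeros = tf.zeros([2, 3])           # 2x3 tensor of zeros
ones = tf.ones([2, 3])             # 2x3 tensor of ones
nines = tf.fill([2, 3], 9.0)       # 2x3 tensor filled with 9.0
const = tf.constant([1.0, 2.0])    # tensor built from a given value
with tf.Session() as sess:
    print(sess.run([zeros, ones, nines, const]))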

2. Initializing variables in TensorFlow 1.x

import tensorflow as tf
x = tf.constant([[1.0, 2.0]])
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))
a = tf.matmul(x, w1)
y = tf.matmul(a, w2)
# initialize_all_variables() is deprecated in favor of this form
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(y))

3. Variables and tensors
Once a variable is created, its type can no longer be changed, but its shape can be changed by passing validate_shape=False, as sketched below.
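
A minimal TF 1.x sketch of a shape-changing assignment via tf.assign with validate_shape=False:

import tensorflow as tf
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1), name="w1")
w2 = tf.Variable(tf.random_normal([2, 2], stddev=1), name="w2")
# tf.assign(w1, w2) would fail: shapes (2, 3) and (2, 2) do not match
assign_op = tf.assign(w1, w2, validate_shape=False)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(assign_op).shape)   # (2, 2)
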
4. Variable scopes for managing variables
(1) Creating variables with get_variable()

# these two statements create equivalent variables
a = tf.Variable(tf.constant(1.0, shape=[1]), name="a")
a = tf.get_variable("a", shape=[1], initializer=tf.constant_initializer(1.0))

Common initializer functions include tf.constant_initializer(), tf.random_normal_initializer(), tf.truncated_normal_initializer(), tf.random_uniform_initializer(), tf.zeros_initializer(), and tf.ones_initializer().

(2) variable_scope() and name_scope()

import tensorflow as tf
with tf.variable_scope("one"):
    a = tf.get_variable("a", [1], initializer=tf.constant_initializer(1.0))
with tf.variable_scope("one", reuse=True):
    a2 = tf.get_variable("a", [1])
    print(a.name, a2.name)   # one/a:0 one/a:0

variable_scope() changes how get_variable() behaves: to retrieve an existing variable, the context manager must be entered with reuse=True.
If the variable scope is created with the default reuse=False, get_variable() creates a new variable, and raises an error if a variable with the same name already exists.

import tensorflow as tf
a = tf.get_variable("a", [1], initializer=tf.constant_initializer(1.0))
print(a.name)        # a:0
with tf.variable_scope("one"):
    a2 = tf.get_variable("a", [1], initializer=tf.constant_initializer(1.0))
    print(a2.name)   # one/a:0
with tf.variable_scope("one"):
    with tf.variable_scope("two"):
        a4 = tf.get_variable("a", [1])
        print(a4.name)   # one/two/a:0
    b = tf.get_variable("b", [1])
    print(b.name)        # one/b:0

with tf.variable_scope("", reuse=True):
    a5 = tf.get_variable("one/two/a", [1])
    print(a5 == a4)      # True

(3) The difference between name_scope() and variable_scope()

import tensorflow as tf
with tf.variable_scope("one"):
    a = tf.get_variable("var1", [1])
    print(a.name)    # one/var1:0
with tf.variable_scope("two"):
    b = tf.get_variable("var2", [1])
    print(b.name)    # two/var2:0
with tf.name_scope("a"):
    a = tf.Variable([1], name="a")
    print(a.name)    # a/a:0 -- the Variable class receives the name_scope prefix
    a = tf.get_variable("b", [1])
    print(a.name)    # b:0 -- get_variable ignores the name_scope

with tf.name_scope("b"):
    # calling tf.get_variable("b", [1]) here would raise an error:
    # name_scope adds no prefix, and a variable named "b" already exists
    pass

When get_variable() is used inside name_scope(), the generated variable name does not receive the scope-name prefix.
A variable created with the Variable class inside name_scope() does receive the scope-name prefix.

Inside variable_scope(), both get_variable() and the Variable class add the scope-name prefix to the generated variable name.
name_scope() has no reuse parameter.
