
Python_DL_Keras&Tensorflow


Keras: https://keras.io/

Mofan (莫烦) Keras: https://www.bilibili.com/video/BV1TW411Y7HU?from=search&seid=333955059060890767

Keras&Tensorflow: https://space.bilibili.com/6001266/video?tid=36&keyword=&order=pubdate

Andrew Ng (吴恩达): https://space.bilibili.com/46880349

    https://space.bilibili.com/46880349/channel/index

 

 

Keras Basics Tutorial (deep learning framework)

https://space.bilibili.com/6001266/video?tid=36&keyword=&order=pubdate

5. Linear Regression

In Keras, a Dense (fully connected) layer is linear by default, i.e. it applies no activation function.

import keras
import numpy as np
import matplotlib.pyplot as plt
# Sequential: a model built layer by layer, in order
from keras.models import Sequential
# Dense: a fully connected layer
from keras.layers import Dense

# Use numpy to generate 100 random points
x_data = np.random.rand(100)
noise = np.random.normal(0, 0.01, x_data.shape)
y_data = x_data*0.1 + 0.2 + noise


# Build a sequential model
model = Sequential()
# Add one fully connected layer to the model
# units: dimensionality of the output space; input_dim: dimensionality of the input space
model.add(Dense(units=1, input_dim=1))
model.compile(optimizer='sgd', loss='mse')

# Train for 3001 steps
for step in range(3001):
    # Train on one batch (here: all 100 points) per step
    cost = model.train_on_batch(x_data, y_data)
    # Print the cost every 500 batches
    if step % 500 == 0:
        print("cost:", cost)

# Print the weight and bias
W, b = model.layers[0].get_weights()
print('W:', W, 'b:', b)

# Feed x_data into the network to get the predictions y_pred
y_pred = model.predict(x_data)

# Show the random points
plt.scatter(x_data, y_data)
# Show the fitted line
plt.plot(x_data, y_pred, 'r--')
plt.show()
Linear Regression
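
Since y_data was generated as y = 0.1*x + 0.2 + noise, the trained layer should end up with W ≈ 0.1 and b ≈ 0.2. As a quick sanity check (a sketch added here, not part of the original script), you can compare against numpy's least-squares fit in the same session:

# np.polyfit(..., 1) fits a degree-1 polynomial (a line) by least squares,
# which the SGD-trained Dense layer above should closely match.
slope, intercept = np.polyfit(x_data, y_data, 1)
print('least-squares fit: slope=%.4f, intercept=%.4f' % (slope, intercept))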

 

6. Nonlinear Regression

In Keras, to make the network nonlinear you must add a nonlinear activation function to the model. There are two ways to add an activation:

from keras.layers import Dense,Activation

#### Method 1
# By default a Dense layer has no activation (it is linear); add an
# Activation layer to specify a nonlinear function.
# Model: 1-10-1
# units: dimensionality of the output space; input_dim: dimensionality of the input space
model.add(Dense(units=10, input_dim=1))
model.add(Activation('tanh'))
# input_dim can be given or omitted here: Keras infers it from the previous layer.
model.add(Dense(units=1))
model.add(Activation('tanh'))
Activation Method 1
model.add(Dense(units=10, input_dim=1, activation='tanh'))
model.add(Dense(units=1, activation='tanh'))
Activation Method 2

Nonlinear Regression_01

import numpy as np
import matplotlib.pyplot as plt
# Sequential: a model built layer by layer, in order
from keras.models import Sequential
# Dense: a fully connected layer
from keras.layers import Dense,Activation
from keras.optimizers import SGD

# Use numpy to generate 200 random points
x_data = np.linspace(-0.5, 0.5, 200)
noise = np.random.normal(0, 0.02, x_data.shape)
y_data = np.square(x_data) + noise


# Build a sequential model
model = Sequential()

'''
#### Method 1
# By default a Dense layer has no activation (it is linear); add an
# Activation layer to specify a nonlinear function.
# Model: 1-10-1
# units: dimensionality of the output space; input_dim: dimensionality of the input space
model.add(Dense(units=10, input_dim=1))
model.add(Activation('tanh'))
# input_dim can be given or omitted here: Keras infers it from the previous layer.
model.add(Dense(units=1))
model.add(Activation('tanh'))
'''

#### Method 2
model.add(Dense(units=10, input_dim=1, activation='tanh'))
model.add(Dense(units=1, activation='tanh'))

# SGD's default learning rate is 0.01; with such a small rate many
# iterations are needed, so construct the SGD optimizer ourselves.
sgd = SGD(learning_rate=0.3)
model.compile(optimizer=sgd, loss='mse')

# Train for 3001 steps
for step in range(3001):
    # Train on one batch (here: all 200 points) per step
    cost = model.train_on_batch(x_data, y_data)
    # Print the cost every 500 batches
    if step % 500 == 0:
        print("cost:", cost)

# Print the weights and biases of the first layer
W, b = model.layers[0].get_weights()
print('W:', W, 'b:', b)

# Feed x_data into the network to get the predictions y_pred
y_pred = model.predict(x_data)

# Show the random points
plt.scatter(x_data, y_data)
# Show the fitted curve
plt.plot(x_data, y_pred, 'r--')
plt.show()
Nonlinear Regression 

 

7. MNIST数据集及Softmax

The softmax function is also an activation function, generally used for classification. For classification problems we typically put softmax in the last layer of the network, which converts the network's raw outputs into probability values.
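
A minimal numpy sketch of what softmax computes (illustrative only; the function and example values below are not from the original post):

import numpy as np

def softmax(z):
    z = z - np.max(z)   # shift for numerical stability; does not change the result
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # hypothetical raw network outputs
probs = softmax(logits)
print(probs)        # roughly [0.659 0.242 0.099]
print(probs.sum())  # 1.0 -- a valid probability distribution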


8. MNIST分类程序   

For MNIST classification we use model.fit to train the model instead of a for loop, and model.evaluate to evaluate it.

import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Load the data
(x_train,y_train),(x_test,y_test) = mnist.load_data()
# Reshape the (60000, 28, 28) array into (60000, 784)
# -1 lets reshape infer that dimension automatically; /255.0 normalizes pixels to [0, 1].
x_train = x_train.reshape(x_train.shape[0],-1)/255.0
x_test = x_test.reshape(x_test.shape[0],-1)/255.0

# Convert the labels to one-hot format
y_train = np_utils.to_categorical(y_train,num_classes=10)
y_test = np_utils.to_categorical(y_test,num_classes=10)

# Build the network: 784 input neurons, 10 output neurons
model = Sequential([
        Dense(input_dim=784,units=10,bias_initializer="one",activation='softmax')
    ])

# Define the optimizer, loss function, and metrics
sgd = SGD(learning_rate=0.2)
model.compile(optimizer=sgd, loss='mse',metrics=['accuracy'])

# batch_size is the number of images per batch; epochs is the number of
# training passes, where one epoch covers the whole training set.
model.fit(x_train,y_train,batch_size=32,epochs=10)

# Evaluate the model
loss, accuracy = model.evaluate(x_test,y_test)

print('\ntest loss', loss)
print('accuracy', accuracy)
MNIST Classification
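
After training, the model can also classify individual images. A small usage sketch (added here, not in the original post; run it in the same session as the script above):

# model.predict returns a (batch, 10) array of softmax probabilities;
# argmax picks the most likely digit. y_test is one-hot, so argmax
# recovers the true label.
probs = model.predict(x_test[:1])
print('predicted digit:', np.argmax(probs))
print('true digit:', np.argmax(y_test[0]))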

Output: the loss starts at about 0.04 and accuracy starts at about 0.77.

%run Keras/2_mnist_crossentropy.py
Epoch 1/10
60000/60000 [==============================] - 2s 38us/step - loss: 0.0381 - accuracy: 0.7694
Epoch 2/10
60000/60000 [==============================] - 2s 37us/step - loss: 0.0204 - accuracy: 0.8808
Epoch 3/10
60000/60000 [==============================] - 2s 41us/step - loss: 0.0177 - accuracy: 0.8931
Epoch 4/10
60000/60000 [==============================] - 2s 36us/step - loss: 0.0165 - accuracy: 0.8991
Epoch 5/10
60000/60000 [==============================] - 2s 39us/step - loss: 0.0156 - accuracy: 0.9033
Epoch 6/10
60000/60000 [==============================] - 2s 40us/step - loss: 0.0151 - accuracy: 0.9065
Epoch 7/10
60000/60000 [==============================] - 3s 42us/step - loss: 0.0146 - accuracy: 0.9086
Epoch 8/10
60000/60000 [==============================] - 2s 39us/step - loss: 0.0143 - accuracy: 0.9110
Epoch 9/10
60000/60000 [==============================] - 2s 39us/step - loss: 0.0140 - accuracy: 0.9124
Epoch 10/10
60000/60000 [==============================] - 2s 41us/step - loss: 0.0137 - accuracy: 0.9139
10000/10000 [==============================] - 0s 26us/step

W[0]: [-0.03793796 -0.07099154  0.012307    0.00737928  0.04566661  0.07897843  -0.02173948  0.01890512  0.03027461 -0.05212684] 
b: [0.8575443 1.2337052 1.0053563 0.8201035 1.093483  1.5314213 1.0139054  1.2782687 0.2994195 0.8668173]

test loss 0.01302071159919724
accuracy 0.917900025844574
Output

 

 

9. Cross-Entropy

When the sigmoid function is used as the activation, gradient descent is very slow as the output y approaches 0 or 1, and fastest as y approaches 0.5. This is because the gradient of the quadratic cost function depends on the derivative of the activation, σ'(z), i.e. on the derivative of the sigmoid, which vanishes at both ends.
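
For reference, these are the standard formulas behind this claim (added here; the text above only states them in words). With the quadratic cost

$$C = \frac{(y-a)^2}{2}, \qquad a = \sigma(z), \quad z = wx + b,$$

the gradients are

$$\frac{\partial C}{\partial w} = (a-y)\,\sigma'(z)\,x, \qquad \frac{\partial C}{\partial b} = (a-y)\,\sigma'(z),$$

so learning stalls wherever $\sigma'(z)$ is small, i.e. when the sigmoid output saturates near 0 or 1.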


With the cross-entropy cost function, the adjustments to the weights and biases do not depend on the derivative of the activation, σ'(z), whereas with the quadratic cost they do. The cross-entropy gradient is σ(z) − y, so the further we are from the target, i.e. the larger the error, the larger the updates to w and b and the faster training proceeds.
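
Concretely (standard form, consistent with the σ(z) − y gradient quoted above): the binary cross-entropy cost is

$$C = -\big[\,y\ln a + (1-y)\ln(1-a)\,\big], \qquad a = \sigma(z),$$

and its gradients contain no $\sigma'(z)$ factor:

$$\frac{\partial C}{\partial w} = x\,(\sigma(z)-y), \qquad \frac{\partial C}{\partial b} = \sigma(z)-y.$$

A large error therefore translates directly into a large update.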

If the output neurons are linear, the quadratic cost function is a suitable choice; if the output neurons are S-shaped (sigmoid-type) functions such as sigmoid or tanh, the cross-entropy cost function is the better fit.


When the output-layer neurons are sigmoid, the cross-entropy cost function can be used; softmax is commonly used as the last layer, and its matching cost function is the log-likelihood cost.
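
For reference (standard form, added here): with a softmax output layer, the log-likelihood cost for an example whose true class is $k$ is

$$C = -\ln a_k,$$

and its gradient with respect to the pre-softmax input $z_j$ is simply $a_j - y_j$, mirroring the sigmoid + cross-entropy case. This softmax + log-likelihood pairing is exactly what the code below uses via loss='categorical_crossentropy'.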


cross-entropy + softmax: from the output you can see that the first few epochs descend a bit faster than with the quadratic cost function. The code is identical to the previous section except for the loss.

import numpy as np
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Load the data
(x_train,y_train),(x_test,y_test) = mnist.load_data()
# Reshape the (60000, 28, 28) array into (60000, 784)
# -1 lets reshape infer that dimension automatically; /255.0 normalizes pixels to [0, 1].
x_train = x_train.reshape(x_train.shape[0],-1)/255.0
x_test = x_test.reshape(x_test.shape[0],-1)/255.0

# Convert the labels to one-hot format
y_train = np_utils.to_categorical(y_train,num_classes=10)
y_test = np_utils.to_categorical(y_test,num_classes=10)

# Build the network: 784 input neurons, 10 output neurons
model = Sequential([
        Dense(input_dim=784,units=10,bias_initializer="one",activation='softmax')
    ])

# Define the optimizer, loss function, and metrics
sgd = SGD(learning_rate=0.2)
model.compile(optimizer=sgd, loss='categorical_crossentropy',metrics=['accuracy'])

# batch_size is the number of images per batch; epochs is the number of
# training passes, where one epoch covers the whole training set.
model.fit(x_train,y_train,batch_size=32,epochs=10)

# Evaluate the model
loss, accuracy = model.evaluate(x_test,y_test)

print('\ntest loss', loss)
print('accuracy', accuracy)
cross-entropy & softmax

Output: accuracy starts at about 0.89 in the first epoch, noticeably higher out of the gate and quicker to converge than with the quadratic cost (which started at about 0.77). The raw loss values are not directly comparable, since the loss function changed.

Epoch 1/10
60000/60000 [==============================] - 3s 55us/step - loss: 0.3771 - accuracy: 0.8939
Epoch 2/10
60000/60000 [==============================] - 4s 62us/step - loss: 0.3031 - accuracy: 0.9141
Epoch 3/10
60000/60000 [==============================] - 3s 56us/step - loss: 0.2904 - accuracy: 0.9177
Epoch 4/10
60000/60000 [==============================] - 3s 57us/step - loss: 0.2833 - accuracy: 0.9209
Epoch 5/10
60000/60000 [==============================] - 2s 41us/step - loss: 0.2781 - accuracy: 0.9219
Epoch 6/10
60000/60000 [==============================] - 3s 42us/step - loss: 0.2747 - accuracy: 0.9227
Epoch 7/10
60000/60000 [==============================] - 2s 39us/step - loss: 0.2711 - accuracy: 0.9242
Epoch 8/10
60000/60000 [==============================] - 2s 36us/step - loss: 0.2683 - accuracy: 0.9254
Epoch 9/10
60000/60000 [==============================] - 2s 38us/step - loss: 0.2674 - accuracy: 0.9250
Epoch 10/10
60000/60000 [==============================] - 2s 36us/step - loss: 0.2653 - accuracy: 0.9265
10000/10000 [==============================] - 0s 25us/step

W[0]: [-0.06596227  0.05133935 -0.03020713  0.05072761 -0.01564101  0.062588   0.05685071 -0.05082045 -0.01905373  0.02436206] 
b: [ 0.0500211   1.7270411   1.2487147   0.43417516  1.051346    3.2956278
  0.692315    2.344586   -1.3288382   0.48500693]

test loss 0.28017732598781586
accuracy 0.9203000068664551
Output

Source: https://www.cnblogs.com/tlfox2006/p/13269805.html