
A Deep Dive into Reinforcement Learning: The Bellman Equation (Part 1)

Author: 互联网

Reinforcement Learning

Reinforcement learning centers on interactive learning between an agent and its environment.

Markov Decision Processes

Reinforcement learning problems are usually described with a Markov decision process (MDP), defined as a six-tuple $M=(\mathcal{S},\mathcal{A},P,r,\rho_0,\gamma)$: the state space $\mathcal{S}$, the action space $\mathcal{A}$, the transition probability function $P$, the reward function $r$, the initial state distribution $\rho_0$, and the discount factor $\gamma$.

Different MDP settings affect both the design of reinforcement learning algorithms and their theoretical analysis. MDPs have one key property, the Markov property:
$$P(s_t\mid s_0,a_0,s_1,a_1,\cdots,s_{t-1},a_{t-1})=P(s_t\mid s_{t-1},a_{t-1}),$$
which states that the state at the next time step depends only on the current state and action. The Markov property simplifies the interdependence between states and actions over the course of the interaction.

The figure above illustrates the interaction between the agent and the environment. First, an initial state $s_0$ is drawn from the initial state distribution. The agent then uses its policy $\pi$ to produce an action $a_0$ for the current state and executes $a_0$ in the environment. The environment responds with a reward $r_0$ and the next state $s_1$. Repeating this interaction, the agent obtains a sequence of states, actions, and rewards, $\{s_0,a_0,r_0,s_1,a_1,r_1,\cdots\}$, called a trajectory of the current policy. A trajectory can be split into individual tuples $(s_t,a_t,r_t,s_{t+1})$, called transition samples. If the interaction is episodic, the whole process resets after the agent has interacted with the environment for a certain number of time steps. If the interaction is infinite-horizon, the agent and environment can keep interacting until a termination condition is triggered.
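The interaction loop above can be sketched in a few lines of Python. The MDP below is a made-up toy (three states, two actions, state 2 terminal); the transition table, `step`, and `rollout` are illustrative assumptions, not part of the original text.

```python
import random

# Toy episodic MDP (hypothetical): TRANSITIONS[s][a] -> list of
# (probability, next_state, reward); state 2 is terminal.
TRANSITIONS = {
    0: {0: [(1.0, 1, 0.0)], 1: [(0.5, 1, 1.0), (0.5, 2, 0.0)]},
    1: {0: [(1.0, 2, 2.0)], 1: [(1.0, 0, 0.0)]},
}
TERMINAL = 2

def step(s, a, rng):
    """Sample (reward, next_state) from the transition model p(s'|s,a)."""
    probs, nexts, rewards = zip(*TRANSITIONS[s][a])
    i = rng.choices(range(len(probs)), weights=probs)[0]
    return rewards[i], nexts[i]

def rollout(policy, rng, max_steps=100):
    """Interact with the environment, collecting transition samples (s,a,r,s')."""
    s, samples = 0, []              # initial state s0 = 0
    for _ in range(max_steps):
        a = policy(s, rng)
        r, s2 = step(s, a, rng)
        samples.append((s, a, r, s2))
        if s2 == TERMINAL:          # episodic: stop when the episode ends
            break
        s = s2
    return samples

rng = random.Random(0)
traj = rollout(lambda s, r: r.choice([0, 1]), rng)   # uniformly random policy
```

Each element of `traj` is one transition sample $(s_t,a_t,r_t,s_{t+1})$, exactly the unit into which the text splits a trajectory.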

The Optimization Objective of Reinforcement Learning

Reinforcement learning considers the long-run cumulative reward produced by the interaction between agent and environment. Concretely, for each trajectory, discounting future rewards by a factor $\gamma$ yields a cumulative reward (return) $G_t$:
$$G_t=\sum_{i=t}^{T}\gamma^{i-t}r_i=r_t+\gamma\,r_{t+1}+\gamma^2 r_{t+2}+\cdots+\gamma^{T-t}r_T.$$
If the interaction is infinite, i.e. $T\rightarrow\infty$, the return becomes
$$G_t=\sum_{i=t}^{\infty}\gamma^{i-t}r_i=r_t+\gamma\,r_{t+1}+\gamma^2 r_{t+2}+\gamma^3 r_{t+3}+\cdots.$$
The discount factor usually lies between 0 and 1, which gives the objective a finite upper bound over an infinite horizon and makes it mathematically well defined for analysis. The closer $\gamma$ is to 0, the more the objective emphasizes short-term gains; the closer it is to 1, the more it emphasizes long-run cumulative gains. Suppose one policy earns a very high reward over a brief initial stretch but scores poorly afterwards, while another maintains a steady reward throughout, and the two have the same long-run cumulative reward. The average cumulative reward cannot distinguish them, whereas the discounted return increasingly favors the first policy as $\gamma$ approaches 0. The reinforcement learning literature therefore typically uses the discounted cumulative reward as the optimization objective.
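The discounted return and the front-loaded vs. steady comparison above can be checked numerically. The two reward sequences below are illustrative assumptions chosen so that their undiscounted totals match:

```python
def discounted_return(rewards, gamma):
    """G_t = sum_i gamma^(i-t) r_i, computed backwards via G_t = r_t + gamma*G_{t+1}."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

front_loaded = [10.0, 0.0, 0.0, 0.0]   # very high reward early, nothing after
steady       = [2.5, 2.5, 2.5, 2.5]    # same undiscounted total (10.0)

g_front  = discounted_return(front_loaded, 0.5)   # = 10.0
g_steady = discounted_return(steady, 0.5)         # = 4.6875
```

With $\gamma=1$ the two returns coincide, so the undiscounted criterion cannot separate the policies; with $\gamma=0.5$ the front-loaded sequence is clearly preferred, matching the discussion above.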

Value Functions and the Bellman Equation

A value function describes the expected cumulative reward associated with the current state. There are two kinds: the state value function, which maps states to expected future cumulative reward, $\mathcal{S}\rightarrow\mathbb{R}$; and the state-action value function, which maps state-action pairs to expected future cumulative reward, $\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$.
Given an MDP $M$ and a policy $\pi$, the state value function is defined as
$$\begin{aligned}V_{\pi}(s)&=\mathbb{E}_{s_t,a_t,s_{t+1},\cdots\sim\pi}\left[G_t\mid s_t=s\right]\\&=\mathbb{E}_{s_t,a_t,s_{t+1},\cdots\sim\pi}\left[\left.\sum_{i=t}^{\infty}\gamma^{i-t}r_i\,\right|\,s_t=s\right]\\&=\mathbb{E}_{s_t,a_t,s_{t+1},\cdots\sim\pi}\left[\left.r_t+\gamma\sum_{i=t+1}^{\infty}\gamma^{i-(t+1)}r_i\,\right|\,s_t=s\right]\\&=\mathbb{E}_{s_t,a_t,s_{t+1},\cdots\sim\pi}\left[r_t+\gamma G_{t+1}\mid s_t=s\right]\end{aligned}$$
where $s_t,a_t,s_{t+1},\cdots\sim\pi$ indicates that the trajectory comes from the policy $\pi$ interacting with the environment. The expression is the expected cumulative reward obtained by following $\pi$ starting from state $s_t=s$.
Analogously, the state-action value function is defined as
$$\begin{aligned}Q_{\pi}(s,a)&=\mathbb{E}_{s_t,a_t,s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[G_t\mid s_t=s,a_t=a\right]\\&=\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[G_t\mid s_t=s,a_t=a\right]\\&=\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[\left.\sum_{i=t}^{\infty}\gamma^{i-t}r_i\,\right|\,s_t=s,a_t=a\right]\\&=\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[\left.r_t+\gamma\sum_{i=t+1}^{\infty}\gamma^{i-(t+1)}r_i\,\right|\,s_t=s,a_t=a\right]\\&=\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[r_t+\gamma G_{t+1}\mid s_t=s,a_t=a\right]\end{aligned}$$
This is the expected cumulative reward obtained by starting from state $s_t$, executing action $a_t$, and thereafter following the policy $\pi$ in the environment.
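Since both value functions are expectations of the return $G_t$, they can be estimated by averaging returns over many rollouts (Monte Carlo estimation). The one-state chain below, its coin-flip reward, and the truncation horizon are all illustrative assumptions; its analytic value is $V(s)=\mathbb{E}[r]/(1-\gamma)=2$.

```python
import random

GAMMA = 0.5

def rollout_return(rng, horizon=50):
    """Return G_t for one truncated rollout of a toy one-state chain."""
    g, discount = 0.0, 1.0
    for _ in range(horizon):           # truncate the infinite sum; gamma^50 is negligible
        r = rng.choice([0.0, 2.0])     # stochastic reward with mean 1 (assumed)
        g += discount * r
        discount *= GAMMA
    return g

def mc_value(n_episodes=1000, seed=0):
    """Monte Carlo estimate of V_pi(s): average the returns observed from s."""
    rng = random.Random(seed)
    return sum(rollout_return(rng) for _ in range(n_episodes)) / n_episodes

v_hat = mc_value()    # should be close to the analytic value 1/(1 - 0.5) = 2
```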
The Bellman equation is a dynamic programming equation. It is the core identity of reinforcement learning algorithms, establishing a recursive relationship between the value functions before and after a state transition. For any stochastic policy $\pi$, the Bellman expectation equation for the state value function is:
$$\begin{aligned}V_{\pi}(s)&=\mathbb{E}_{s_t,a_t,s_{t+1},\cdots\sim\pi}\left[G_t\mid s_t=s\right]\\&=\mathbb{E}_{s_t,a_t,s_{t+1},\cdots\sim\pi}\left[r_t+\gamma G_{t+1}\mid s_t=s\right]\\&=\sum_{a\in\mathcal{A}}\pi(a\mid s)\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\,\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[r(s,a,s')+\gamma G_{t+1}\mid s_{t+1}=s'\right]\\&=\sum_{a\in\mathcal{A}}\pi(a\mid s)\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma\,\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[G_{t+1}\mid s_{t+1}=s'\right]\right]\\&=\sum_{a\in\mathcal{A}}\pi(a\mid s)\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma V_{\pi}(s')\right]\end{aligned}$$
The Bellman expectation equation for the state-action value function is:
$$\begin{aligned}Q_{\pi}(s,a)&=\mathbb{E}_{s_t,a_t,s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[G_t\mid s_t=s,a_t=a\right]\\&=\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[r_t+\gamma G_{t+1}\mid s_t=s,a_t=a\right]\\&=\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\,\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[r(s,a,s')+\gamma G_{t+1}\mid s_{t+1}=s'\right]\\&=\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma\,\mathbb{E}_{s_{t+1},a_{t+1},s_{t+2},\cdots\sim\pi}\left[G_{t+1}\mid s_{t+1}=s'\right]\right]\\&=\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma V_\pi(s')\right]\\&=\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma\sum_{a'\in\mathcal{A}}\pi(a'\mid s')Q_{\pi}(s',a')\right]\end{aligned}$$
From the derivations above, the state value function and the state-action value function can be converted into each other:
$$\begin{aligned}V_{\pi}(s)&=\sum_{a\in\mathcal{A}}\pi(a\mid s)Q_{\pi}(s,a)\\Q_{\pi}(s,a)&=r(s,a)+\gamma\sum_{s'\in\mathcal{S}}P(s'\mid s,a)V_{\pi}(s')\end{aligned}$$
where $r(s,a)=\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\,r(s,a,s')$ denotes the expected immediate reward.
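The two conversion formulas can be verified numerically on a small tabular MDP. The random transition model, rewards, and policy below are illustrative assumptions; the exact $V_\pi$ comes from solving the linear Bellman expectation equation $V=r_\pi+\gamma P_\pi V$.

```python
import numpy as np

n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.random((n_s, n_a, n_s)); P /= P.sum(axis=2, keepdims=True)  # p(s'|s,a)
r = rng.random((n_s, n_a))                                          # r(s,a)
pi = rng.random((n_s, n_a)); pi /= pi.sum(axis=1, keepdims=True)    # pi(a|s)

# Solve the Bellman expectation equation exactly: V = r_pi + gamma * P_pi V
r_pi = (pi * r).sum(axis=1)              # expected immediate reward under pi
P_pi = np.einsum("sa,sat->st", pi, P)    # state-to-state transition under pi
V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, r_pi)

Q = r + gamma * P @ V                    # Q_pi(s,a) = r(s,a) + gamma sum_s' p V(s')
V_from_Q = (pi * Q).sum(axis=1)          # V_pi(s)   = sum_a pi(a|s) Q_pi(s,a)
```

Recovering `V` from `Q` reproduces the solved `V` to machine precision, confirming the two formulas are consistent.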

A Discussion of $\pi(a\mid s)$ and $p(s'\mid s,a)$

The relationship between the current state $s$, the executed action $a$, the next state $s'$, and the associated probability distributions $\pi(a\mid s)$ and $p(s'\mid s,a)$ is illustrated in the figure below. The policy $\pi(a\mid s)$ is a crucial distribution in both the state value function $V$ and the state-action value function $Q$: it is the distribution over actions $a$ for a given state $s$. When $\pi$ is deterministic, the action produced for a given state $s$ is fixed, so $\pi(a\mid s)=1$ for that action, often written $\pi(s)=a$. When $\pi$ is stochastic, the output for a given state $s$ is a distribution over actions, written $a\sim\pi(\cdot\mid s)$.

The transition probability $p(s'\mid s,a)$ is a key distribution in the state-action value function $Q$: it is the distribution of the next state $s'$ given the current state $s$ and action $a$. It is worth noting that how the action space $\mathcal{A}$ is partitioned directly affects the transition probability $p(s'\mid s,a)$. In Chinese chess, the advisor (仕) may only move diagonally within the palace. Suppose the current state $s$ is the advisor at the center of the palace. Executing one of the actions in the action space $\mathcal{A}_1=\{$仕五进六, 仕五进四, 仕五退六, 仕五退四$\}$ leads to a next state $s'$ with transition probability $p(s_i'\mid s_1,a_i)=1$ for $i\in\{1,2,3,4\}$; that is, given the current state $s$ and action $a$, the next state $s'$ is fully determined.

As shown in the figure below, if the action space is instead partitioned as $\mathcal{A}_2=\{$进仕, 退仕$\}$, which is coarser-grained than $\mathcal{A}_1$, then the action 进仕 (advance the advisor) subsumes the two moves 仕五进六 and 仕五进四, and the action 退仕 (retreat the advisor) subsumes 仕五退六 and 仕五退四. Now, given the current state $s$ and an action $a$, the next state $s'$ is stochastic: $0\le p(s_i'\mid s_1,a_k)\le 1$ for $k\in\{1,2\}$, $i\in\{1,2,3,4\}$.
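The effect of the action-space granularity on $p(s'\mid s,a)$ can be encoded directly as transition tables. The state labels `s1'..s4'` and the 50/50 split under the coarse actions are illustrative assumptions (how the mass splits would depend on how the coarse action is resolved), not part of the rules of the game:

```python
# p(s'|s1, a) for the fine action space A1: each action pins down the next state.
fine = {
    "仕五进六": {"s1'": 1.0},
    "仕五进四": {"s2'": 1.0},
    "仕五退六": {"s3'": 1.0},
    "仕五退四": {"s4'": 1.0},
}
# p(s'|s1, a) for the coarse action space A2: mass spread over the refinements.
coarse = {
    "进仕": {"s1'": 0.5, "s2'": 0.5},   # "advance" covers the two advancing moves
    "退仕": {"s3'": 0.5, "s4'": 0.5},   # "retreat" covers the two retreating moves
}

fine_deterministic = all(max(d.values()) == 1.0 for d in fine.values())
coarse_stochastic  = all(max(d.values()) < 1.0 for d in coarse.values())
```

Either way, every conditional distribution still sums to 1; only the determinism of the next state changes with the partition of $\mathcal{A}$.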

Proofs of Related Theorems

Theorem 1: Given a policy $\pi$, define the Bellman operator $\mathcal{M}$ for the state value function $V_\pi(s)$ as
$$\mathcal{M}[V_\pi(s)]=\sum_{a\in\mathcal{A}}\pi(a\mid s)\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma V_{\pi}(s')\right],\quad\forall s\in\mathcal{S}.$$
Then repeated application of the Bellman operator $\mathcal{M}$ converges.

Proof: Under policy $\pi$, for any two value functions $V_\pi^1(s)$ and $V_\pi^2(s)$ at a state $s$,
$$\begin{aligned}\left|\mathcal{M}[V_\pi^1(s)]-\mathcal{M}[V_\pi^2(s)]\right|&=\left|\sum_{a\in\mathcal{A}}\pi(a\mid s)\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[\gamma\left(V^1_\pi(s')-V^2_\pi(s')\right)\right]\right|\\&\le\gamma\sum_{a\in\mathcal{A}}\pi(a\mid s)\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left|V_\pi^{1}(s')-V_\pi^2(s')\right|\\&\le\gamma\sum_{a\in\mathcal{A}}\pi(a\mid s)\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left(\max_{s''\in\mathcal{S}}\left|V_\pi^1(s'')-V_\pi^2(s'')\right|\right)\\&=\gamma\max_{s''\in\mathcal{S}}\left|V_\pi^1(s'')-V_\pi^2(s'')\right|\\&=\gamma\left\|V^1_{\pi}-V^2_{\pi}\right\|_{\infty}.\end{aligned}$$
Since this bound holds at every state in $\mathcal{S}$,
$$\left\|\mathcal{M}[V_\pi^1]-\mathcal{M}[V_\pi^2]\right\|_\infty\le\gamma\left\|V^1_\pi-V^2_\pi\right\|_\infty.$$
When $\gamma<1$, the Bellman operator $\mathcal{M}$ is a contraction mapping, and therefore
$$\begin{aligned}\left\|\mathcal{M}^{n+1}[V_\pi]-\mathcal{M}^n[V_\pi]\right\|_\infty&\le\gamma\left\|\mathcal{M}^{n}[V_\pi]-\mathcal{M}^{n-1}[V_\pi]\right\|_\infty\\&\le\gamma^2\left\|\mathcal{M}^{n-1}[V_\pi]-\mathcal{M}^{n-2}[V_\pi]\right\|_\infty\\&\le\gamma^3\left\|\mathcal{M}^{n-2}[V_\pi]-\mathcal{M}^{n-3}[V_\pi]\right\|_\infty\\&\;\;\vdots\\&\le\gamma^{n-1}\left\|\mathcal{M}^{2}[V_\pi]-\mathcal{M}[V_\pi]\right\|_\infty\\&\le\gamma^n\left\|\mathcal{M}[V_\pi]-V_\pi\right\|_\infty.\end{aligned}$$
As $n$ tends to infinity, the difference between $\mathcal{M}^{n+1}[V_\pi]$ and $\mathcal{M}^{n}[V_\pi]$ shrinks geometrically to 0, so the sequence $\{V_\pi,\mathcal{M}[V_\pi],\mathcal{M}^2[V_\pi],\cdots\}$ is Cauchy and, by the Banach fixed-point theorem, converges to a unique fixed point.
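The contraction in Theorem 1 can be observed numerically: apply $\mathcal{M}$ repeatedly and check that successive sup-norm differences shrink by at least a factor of $\gamma$. The random MDP and the uniform policy below are illustrative assumptions, and $r(s,a)$ stands in for the expected $r(s,a,s')$:

```python
import numpy as np

n_s, n_a, gamma = 4, 3, 0.8
rng = np.random.default_rng(1)
P = rng.random((n_s, n_a, n_s)); P /= P.sum(axis=2, keepdims=True)  # p(s'|s,a)
r = rng.random((n_s, n_a))                                          # r(s,a)
pi = np.full((n_s, n_a), 1.0 / n_a)                                 # uniform policy

def bellman_op(V):
    """M[V](s) = sum_a pi(a|s) sum_s' p(s'|s,a) [r(s,a) + gamma V(s')]."""
    return (pi * (r + gamma * P @ V)).sum(axis=1)

V = np.zeros(n_s)
diffs = []                       # ||M^{n+1}V - M^n V||_inf at each iteration
for _ in range(30):
    V_next = bellman_op(V)
    diffs.append(np.abs(V_next - V).max())
    V = V_next

contraction_ok = all(d2 <= gamma * d1 + 1e-12 for d1, d2 in zip(diffs, diffs[1:]))
```

The differences decay like $\gamma^n$, so the iterates converge to the fixed point $V_\pi$, exactly as the proof predicts.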

Theorem 2: Given any two deterministic policies $\pi$ and $\pi^*$, if for every state $s\in\mathcal{S}$
$$Q_\pi(s,\pi^{*}(s))\ge V_\pi(s),$$
where $\pi^{*}(s)\in\mathcal{A}$, then
$$V_{\pi^{*}}(s)\ge V_{\pi}(s),\quad\forall s\in\mathcal{S},$$
and the policy $\pi^{*}$ is said to be at least as good as $\pi$.

Proof: Given two deterministic policies $\pi$ and $\pi^*$ with $Q_\pi(s,\pi^{*}(s))\ge V_\pi(s)$ for every state $s\in\mathcal{S}$, repeatedly unroll one step under $\pi^*$ and bound $V_\pi$ by $Q_\pi(\cdot,\pi^{*}(\cdot))$:
$$\begin{aligned}Q_\pi(s,\pi^{*}(s))&=\mathbb{E}_{a_t\sim\pi^{*}}\left[r_t+\gamma V_\pi(s_{t+1})\mid s_t=s\right]\\&\le\mathbb{E}_{a_t\sim\pi^{*}}\left[r_t+\gamma\,Q_\pi(s_{t+1},\pi^{*}(s_{t+1}))\mid s_t=s\right]\\&=\mathbb{E}_{a_t,a_{t+1}\sim\pi^{*}}\left[r_t+\gamma\,r_{t+1}+\gamma^2 V_\pi(s_{t+2})\mid s_t=s\right]\\&\le\mathbb{E}_{a_t,a_{t+1},a_{t+2}\sim\pi^{*}}\left[r_t+\gamma\,r_{t+1}+\gamma^2 r_{t+2}+\gamma^3 V_\pi(s_{t+3})\mid s_t=s\right]\\&\;\;\vdots\\&\le\mathbb{E}_{a_t,a_{t+1},a_{t+2},\cdots\sim\pi^{*}}\left[r_t+\gamma\,r_{t+1}+\gamma^2 r_{t+2}+\gamma^3 r_{t+3}+\cdots\mid s_t=s\right]\\&=V_{\pi^{*}}(s).\end{aligned}$$
Hence
$$V_{\pi^{*}}(s)\ge Q_\pi(s,\pi^{*}(s))\ge V_{\pi}(s),\quad\forall s\in\mathcal{S}.$$
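Theorem 2 underlies greedy policy improvement: evaluate $Q_\pi$, then let the new deterministic policy pick $\arg\max_a Q_\pi(s,a)$ in every state. A minimal sketch on a random toy MDP (the transition model and rewards are illustrative assumptions, with $r(s,a)$ as the expected reward and exact evaluation via a linear solve):

```python
import numpy as np

n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(2)
P = rng.random((n_s, n_a, n_s)); P /= P.sum(axis=2, keepdims=True)  # p(s'|s,a)
r = rng.random((n_s, n_a))                                          # r(s,a)

def evaluate(pi):
    """Exact policy evaluation: solve V = r_pi + gamma * P_pi V."""
    r_pi = (pi * r).sum(axis=1)
    P_pi = np.einsum("sa,sat->st", pi, P)
    return np.linalg.solve(np.eye(n_s) - gamma * P_pi, r_pi)

pi = np.full((n_s, n_a), 0.5)               # start from the uniform policy
V_old = evaluate(pi)
Q = r + gamma * P @ V_old                   # Q_pi from V_pi
greedy = np.eye(n_a)[Q.argmax(axis=1)]      # one-hot pi*(s) = argmax_a Q_pi(s,a)
V_new = evaluate(greedy)                    # Theorem 2: V_new >= V_old everywhere
```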
Theorem 2 shows that, given a policy $\pi$ and the current state $s$, the new policy $\pi^{*}$ obtained by choosing an optimal action according to the state-action value function $Q_\pi(s,a)$ is at least as good as the original policy $\pi$. Doing this for every state $s\in\mathcal{S}$ yields the new policy $\pi^{*}$. When this policy improvement procedure converges to the optimal policy, $V_{\pi^{*}}(s)=V_\pi(s)$, and then
$$\begin{aligned}V_{\pi^{*}}(s)&=\sum_{a\in\mathcal{A}}\pi^{*}(a\mid s)Q_{\pi^{*}}(s,a)\\&=\sum_{a\in\mathcal{A}}\pi^{*}(a\mid s)\max_{a\in\mathcal{A}}Q_\pi(s,a)=\max_{a\in\mathcal{A}}Q_{\pi}(s,a)\\&=\max_{a\in\mathcal{A}}\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma V_{\pi}(s')\right]\\&=\max_{a\in\mathcal{A}}\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma V_{\pi^{*}}(s')\right].\end{aligned}$$
The last equality is the Bellman optimality equation.
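The Bellman optimality equation suggests value iteration: iterate $V\leftarrow\max_a\sum_{s'}p(s'\mid s,a)[r+\gamma V(s')]$ until the sup-norm change is negligible. A sketch on a random toy MDP (an illustrative assumption, again using $r(s,a)$ for the expected reward):

```python
import numpy as np

n_s, n_a, gamma = 4, 3, 0.9
rng = np.random.default_rng(3)
P = rng.random((n_s, n_a, n_s)); P /= P.sum(axis=2, keepdims=True)  # p(s'|s,a)
r = rng.random((n_s, n_a))                                          # r(s,a)

V = np.zeros(n_s)
for _ in range(1000):
    V_next = (r + gamma * P @ V).max(axis=1)   # apply the optimality operator M*
    done = np.abs(V_next - V).max() < 1e-10
    V = V_next
    if done:
        break

# At the fixed point, V satisfies the Bellman optimality equation:
residual = np.abs(V - (r + gamma * P @ V).max(axis=1)).max()
```

The residual of the optimality equation at the converged `V` is essentially zero, i.e. `V` is the fixed point of $\mathcal{M}_*$ discussed in Theorem 3 below.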

Theorem 3: Define the Bellman optimality operator $\mathcal{M}_{*}$ for the state value function $V(s)$ as
$$\mathcal{M}_{*}[V(s)]=\max_{a\in\mathcal{A}}\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma V(s')\right],\quad\forall s\in\mathcal{S}.$$
Then repeated application of the Bellman optimality operator $\mathcal{M}_{*}$ converges.

Proof: With the Bellman optimality operator $\mathcal{M}_{*}$ defined above, for any two value functions $V^1$ and $V^2$,
$$\begin{aligned}\left|\mathcal{M}_{*}[V^1(s)]-\mathcal{M}_{*}[V^2(s)]\right|&=\left|\max_{a_1\in\mathcal{A}}\sum_{s'\in\mathcal{S}}p(s'\mid s,a_1)\left[r(s,a_1,s')+\gamma V^1(s')\right]-\max_{a_2\in\mathcal{A}}\sum_{s'\in\mathcal{S}}p(s'\mid s,a_2)\left[r(s,a_2,s')+\gamma V^2(s')\right]\right|\\&\le\max_{a\in\mathcal{A}}\left|\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma V^1(s')\right]-\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[r(s,a,s')+\gamma V^2(s')\right]\right|\\&=\gamma\max_{a\in\mathcal{A}}\left|\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left[V^1(s')-V^2(s')\right]\right|\\&\le\gamma\max_{a\in\mathcal{A}}\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left|V^1(s')-V^2(s')\right|\\&\le\gamma\max_{a\in\mathcal{A}}\sum_{s'\in\mathcal{S}}p(s'\mid s,a)\left\|V^1-V^2\right\|_{\infty}\\&=\gamma\left\|V^{1}-V^{2}\right\|_{\infty},\end{aligned}$$
where the first inequality uses $\left|\max_a f(a)-\max_a g(a)\right|\le\max_a\left|f(a)-g(a)\right|$. When $\gamma<1$, the Bellman optimality operator $\mathcal{M}_{*}$ is a contraction mapping, and therefore
$$\begin{aligned}\left\|\mathcal{M}^{n+1}_*[V]-\mathcal{M}^n_*[V]\right\|_\infty&\le\gamma\left\|\mathcal{M}^{n}_*[V]-\mathcal{M}^{n-1}_*[V]\right\|_\infty\\&\le\gamma^2\left\|\mathcal{M}^{n-1}_*[V]-\mathcal{M}^{n-2}_*[V]\right\|_\infty\\&\le\gamma^3\left\|\mathcal{M}^{n-2}_*[V]-\mathcal{M}^{n-3}_*[V]\right\|_\infty\\&\;\;\vdots\\&\le\gamma^{n-1}\left\|\mathcal{M}^{2}_*[V]-\mathcal{M}_*[V]\right\|_\infty\\&\le\gamma^n\left\|\mathcal{M}_*[V]-V\right\|_\infty.\end{aligned}$$
As $n$ tends to infinity, the difference between $\mathcal{M}^{n+1}_*[V]$ and $\mathcal{M}^{n}_*[V]$ tends to 0, so the sequence $\{V,\mathcal{M}_*[V],\mathcal{M}^2_*[V],\cdots\}$ converges, and it converges to a fixed point.

Source: https://blog.csdn.net/qq_38406029/article/details/121370105