
Recursive Least Squares Estimation




As the number of measurements grows, the computational cost of batch least squares increases rapidly. Recursive least squares provides recurrence equations that allow the estimate to be updated online, in real time.

The Matrix Trace

For background on the matrix trace, see standard references.
Derivative property:
$$\frac{\partial\,\mathrm{Tr}(AX^T)}{\partial X} = A$$
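Since $\mathrm{Tr}(AX^T)=\sum_{i,j}A_{ij}X_{ij}$, this identity can be verified entry by entry with finite differences. A quick numerical sketch (the matrix sizes and random seed are arbitrary choices):

```python
import numpy as np

# Finite-difference sketch of d Tr(A X^T)/dX = A; sizes and seed are arbitrary.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))
eps = 1e-6

grad = np.zeros_like(X)
for i in range(3):
    for j in range(3):
        E = np.zeros_like(X)
        E[i, j] = eps
        # central difference of f(X) = Tr(A X^T) in the (i, j) direction
        grad[i, j] = (np.trace(A @ (X + E).T) - np.trace(A @ (X - E).T)) / (2 * eps)

print(np.allclose(grad, A))  # True
```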

Derivation

Let $\hat x_{k-1}$ be the estimate at time $k-1$ and $y_k$ the measurement at time $k$. The linear recursive estimator can be written as:

$$y_k = H_k x + v_k$$

$$\hat x_k = \hat x_{k-1} + K_k(y_k - H_k\hat x_{k-1})$$

The new estimate $\hat x_k$ is computed from the previous estimate $\hat x_{k-1}$ and the current measurement $y_k$. Here $K_k$ is the gain matrix and $(y_k - H_k\hat x_{k-1})$ is the correction term. If $(y_k - H_k\hat x_{k-1}) = 0$, then $\hat x_k = \hat x_{k-1}$.
The mean of the estimation error $\epsilon_{x,k} = x - \hat x_k$ can be computed as follows:
$$\begin{aligned} E(\epsilon_{x,k}) &= E(x-\hat x_k) \\ &= E(x-\hat x_{k-1}-K_k(y_k-H_k\hat x_{k-1})) \\ &= E(\epsilon_{x,k-1}-K_k(H_kx+v_k-H_k\hat x_{k-1})) \\ &= E(\epsilon_{x,k-1}-K_kH_k\epsilon_{x,k-1}-K_kv_k) \\ &= (I-K_kH_k)E(\epsilon_{x,k-1})-K_kE(v_k) \end{aligned}$$
Clearly, if $E(\epsilon_{x,k-1})=0$ and $E(v_k)=0$, then $E(\epsilon_{x,k})=0$.
Hence, if the measurement noise $v_k$ is zero-mean and the initial estimate is chosen as the expectation of $x$, i.e. $\hat x_0=E(x)$, then by induction $E(\hat x_k)=E(x)$ for all $k$, so the recursive least squares estimator is unbiased.
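The unbiasedness claim is easy to check numerically. Below is a minimal Monte Carlo sketch for a scalar model; the parameters ($H$, $R$, $P_0$), trial count, and step count are illustrative assumptions, not values from the derivation:

```python
import numpy as np

# Monte Carlo sketch of unbiasedness for a scalar model y_k = H x + v_k.
rng = np.random.default_rng(1)
H, R, P0 = 1.0, 0.5, 1.0                   # illustrative model parameters
errors = []
for _ in range(20000):
    x = rng.normal(0.0, np.sqrt(P0))       # draw x so that E(x) = 0 = x_hat_0
    x_hat, P = 0.0, P0
    for _ in range(5):                     # five recursive measurement updates
        K = P * H / (H * P * H + R)        # K_k = P_{k-1} H (H P_{k-1} H + R)^{-1}
        y = H * x + rng.normal(0.0, np.sqrt(R))
        x_hat = x_hat + K * (y - H * x_hat)
        P = (1 - K * H) ** 2 * P + K ** 2 * R   # covariance update, scalar case
    errors.append(x - x_hat)
print(np.mean(errors))                     # ~0: the estimator is unbiased
```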

Solving for the gain matrix $K_k$

The optimization objective is to minimize the variance of the estimation error at time $k$, i.e., the cost function
$$\begin{aligned} J_k &= E[(x_1-\hat x_1)^2+(x_2-\hat x_2)^2+\dots+(x_n-\hat x_n)^2] \\ &= E[\epsilon_{x_1,k}^2+\epsilon_{x_2,k}^2+\dots+\epsilon_{x_n,k}^2] \\ &= E[\epsilon_{x,k}^T\epsilon_{x,k}] \\ &= E[\mathrm{Tr}(\epsilon_{x,k}\epsilon_{x,k}^T)] \\ &= \mathrm{Tr}(P_k) \end{aligned}$$
where $P_k=E[\epsilon_{x,k}\epsilon_{x,k}^T]$ is the estimation-error covariance matrix. Substituting the error recursion $\epsilon_{x,k}=(I-K_kH_k)\epsilon_{x,k-1}-K_kv_k$ gives
$$\begin{aligned} P_k &= E[\epsilon_{x,k}\epsilon_{x,k}^T] \\ &= E[((I-K_kH_k)\epsilon_{x,k-1}-K_kv_k)((I-K_kH_k)\epsilon_{x,k-1}-K_kv_k)^T] \\ &= (I-K_kH_k)E(\epsilon_{x,k-1}\epsilon_{x,k-1}^T)(I-K_kH_k)^T-(I-K_kH_k)E(\epsilon_{x,k-1}v_k^T)K_k^T \\ &\quad -K_kE(v_k\epsilon_{x,k-1}^T)(I-K_kH_k)^T+K_kE(v_kv_k^T)K_k^T \end{aligned}$$
Since $v_k$ is zero-mean and independent of the prior estimation error, $E(\epsilon_{x,k-1}v_k^T)=E(v_k\epsilon_{x,k-1}^T)=0$ and the cross terms vanish. Writing $R_k=E(v_kv_k^T)$ for the measurement noise covariance, we get
$$P_k=(I-K_kH_k)P_{k-1}(I-K_kH_k)^T+K_kR_kK_k^T$$
Taking the partial derivative of the cost function with respect to the gain matrix, using the trace identity above together with $\frac{\partial\,\mathrm{Tr}(ABA^T)}{\partial A}=2AB$ for symmetric $B$, gives
$$\frac{\partial J_k}{\partial K_k}=2(I-K_kH_k)P_{k-1}(-H_k^T)+2K_kR_k$$
Setting $\frac{\partial J_k}{\partial K_k}=0$ gives
$$\begin{aligned} 2(I-K_kH_k)P_{k-1}(-H_k^T)+2K_kR_k &= 0 \\ K_k &= P_{k-1}H_k^T(H_kP_{k-1}H_k^T+R_k)^{-1} \end{aligned}$$
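As a sanity check, the closed-form gain should make the derivative vanish for any positive-definite $P_{k-1}$ and $R_k$. A small numerical sketch (the dimensions, seed, and random SPD construction are arbitrary assumptions):

```python
import numpy as np

# Sketch: the closed-form K_k should zero the derivative dJ_k/dK_k.
rng = np.random.default_rng(3)
n, m = 3, 2
A = rng.standard_normal((n, n)); P = A @ A.T + np.eye(n)  # random SPD P_{k-1}
B = rng.standard_normal((m, m)); R = B @ B.T + np.eye(m)  # random SPD R_k
H = rng.standard_normal((m, n))

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)              # closed-form gain
dJ_dK = 2 * (np.eye(n) - K @ H) @ P @ (-H.T) + 2 * K @ R  # derivative from above
print(np.allclose(dJ_dK, 0))                              # True
```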

Recursive Least Squares Estimation Procedure

1. Initialization

$$\hat x_0=E(x)$$

$$P_0=E[(x-\hat x_0)(x-\hat x_0)^T]$$

2. For $k = 1, 2, \dots$, compute
$$K_k=P_{k-1}H_k^T(H_kP_{k-1}H_k^T+R_k)^{-1}$$

$$\hat x_k=\hat x_{k-1}+K_k(y_k-H_k\hat x_{k-1})$$

$$P_k=(I-K_kH_k)P_{k-1}(I-K_kH_k)^T+K_kR_kK_k^T$$
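The procedure maps directly to code. Below is a minimal sketch in Python/NumPy; the function name `rls_update` and the toy measurement model in the usage example are illustrative choices, not part of the original derivation:

```python
import numpy as np

def rls_update(x_hat, P, y_k, H_k, R_k):
    """One recursive least squares step; returns the updated (x_hat, P)."""
    # Gain: K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}
    K = P @ H_k.T @ np.linalg.inv(H_k @ P @ H_k.T + R_k)
    # Estimate update: x_k = x_{k-1} + K_k (y_k - H_k x_{k-1})
    x_hat = x_hat + K @ (y_k - H_k @ x_hat)
    # Covariance update (Joseph form, as derived above)
    I_KH = np.eye(len(x_hat)) - K @ H_k
    P = I_KH @ P @ I_KH.T + K @ R_k @ K.T
    return x_hat, P

# Illustrative use: estimate a constant 2-vector from noisy scalar measurements.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])
x_hat, P = np.zeros(2), 10.0 * np.eye(2)   # prior mean and covariance (illustrative)
R = np.array([[0.1]])                      # measurement noise covariance R_k
for k in range(200):
    H = rng.standard_normal((1, 2))        # measurement matrix H_k for step k
    y = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)
    x_hat, P = rls_update(x_hat, P, y, H, R)
print(x_hat)                               # close to x_true after 200 updates
```

The Joseph-form covariance update used here is preferred in practice because it keeps $P_k$ symmetric and positive semidefinite under rounding, and it remains valid even if a suboptimal gain is used.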

Alternative Forms
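The covariance update derived above holds for any gain $K_k$. When $K_k$ is the optimal gain, the recursion is commonly rewritten in the equivalent forms

$$\begin{aligned} P_k &= \left(P_{k-1}^{-1}+H_k^TR_k^{-1}H_k\right)^{-1} \\ K_k &= P_kH_k^TR_k^{-1} \\ P_k &= (I-K_kH_k)P_{k-1} \end{aligned}$$

A quick numerical sketch of the equivalence (the dimensions, seed, and random SPD construction are arbitrary assumptions):

```python
import numpy as np

# Sketch verifying the equivalence of the three covariance forms above.
rng = np.random.default_rng(2)
n, m = 3, 2
A = rng.standard_normal((n, n)); P_prev = A @ A.T + np.eye(n)  # random SPD P_{k-1}
B = rng.standard_normal((m, m)); R = B @ B.T + np.eye(m)       # random SPD R_k
H = rng.standard_normal((m, n))

K = P_prev @ H.T @ np.linalg.inv(H @ P_prev @ H.T + R)         # optimal gain
I_KH = np.eye(n) - K @ H
P_joseph = I_KH @ P_prev @ I_KH.T + K @ R @ K.T                # Joseph form
P_short = I_KH @ P_prev                                        # short form
P_info = np.linalg.inv(np.linalg.inv(P_prev) + H.T @ np.linalg.inv(R) @ H)

print(np.allclose(P_joseph, P_info), np.allclose(P_joseph, P_short))  # True True
print(np.allclose(K, P_info @ H.T @ np.linalg.inv(R)))                # True
```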

Source: https://blog.csdn.net/weixin_42170603/article/details/106504553