
Minimizations for Random Variables


1. Minimizations for Random Variables Using Vectors

We are given a random vector \(\pmb Y = [Y_1, Y_2, \cdots, Y_N]^T\) and want to predict a random variable \(X\) using \(\pmb Y\). The random variable \(X\) and the random vector \(\pmb Y\) are not independent.
Here, we predict \(X\) with a combination \(A(\pmb Y)\) of the vector components.

1.1 (a)

Prove the iterative expectation rule:

\[E\{E\{X|\pmb Y\}\}=E\{X\} \]

Solution:
Using the definition of the expectation, we can write:

\[\begin{aligned} E\{E\{X|\pmb{Y}\}\}&=\int \limits_{\pmb y} \Big(\int \limits_{x}xf_{X|\pmb{Y}}(x|\pmb y)\,dx\Big)f_{\pmb Y}(\pmb{y})\,d\pmb{y} \\ &=\int \limits_{x}x\Big(\int \limits_{\pmb y}f_{X|\pmb{Y}}(x|\pmb y)f_{\pmb Y}(\pmb{y})\,d\pmb{y}\Big)dx \\ &=\int \limits_{x} x\Big(\int \limits_{\pmb y}f_{X\pmb{Y}}(x,\pmb y)\,d\pmb y\Big)dx \\ &=\int \limits_{x} xf_X(x)\,dx \\ &=E\{X\} \end{aligned} \]
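As a numerical sanity check, the following sketch verifies the rule with a toy model of my own choosing (not part of the exercise): \(Y \sim \mathcal N(1,1)\) and \(X = Y^2 + W\) with independent noise \(W \sim \mathcal N(0,1)\), so that \(E\{X|Y\} = Y^2\) and both sides should be close to \(E\{Y^2\} = 2\):

```python
# Monte Carlo sketch of the iterative expectation rule E{E{X|Y}} = E{X}.
# Assumed toy model: Y ~ N(1, 1), X = Y^2 + W, W ~ N(0, 1) independent,
# so the conditional mean is E{X|Y} = Y^2.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

y = rng.normal(loc=1.0, scale=1.0, size=n)   # samples of Y
x = y**2 + rng.standard_normal(n)            # samples of X = Y^2 + W

print("E{E{X|Y}} ~", (y**2).mean())   # average E{X|Y} = Y^2 over Y
print("E{X}      ~", x.mean())        # direct sample mean; both ~ 2.0
```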

1.2 (b)

In the following, it is proved that the optimal predictor \(\hat{X} = A(\pmb Y)\) for minimizing the mean square prediction error is given by the conditional mean \(E\{X|\pmb Y\}\). Explain all steps (i) to (ix) of the proof.

Solution:

\[\begin{aligned} \varepsilon^2&=E\{(X-\hat{X})^2\} \\ \mathrm{(i)}&=E\{(X-A(\pmb Y))^2\} \\ \mathrm{(ii)}&=E\{(X-E\{X|\pmb Y\}+E\{X|\pmb Y\}-A(\pmb Y))^2\} \\ \mathrm{(iii)}&=E\{(X-E\{X|\pmb Y\})^2\}+E\{(E\{X|\pmb Y\}-A(\pmb Y))^2\}+2E\{(X-E\{X|\pmb Y\})(E\{X|\pmb Y\}-A(\pmb Y))\} \\ \mathrm{(iv)}&=E\{(X-E\{X|\pmb Y\})^2\}+E\{(E\{X|\pmb Y\}-A(\pmb Y))^2\}+2E\{E\{(X-E\{X|\pmb Y\})(E\{X|\pmb Y\}-A(\pmb Y))|\pmb Y\}\} \\ \mathrm{(v)}&=E\{(X-E\{X|\pmb Y\})^2\}+E\{(E\{X|\pmb Y\}-A(\pmb Y))^2\}+2E\{(E\{X|\pmb Y\}-A(\pmb Y))E\{(X-E\{X|\pmb Y\})|\pmb Y\}\} \\ \mathrm{(vi)}&=E\{(X-E\{X|\pmb Y\})^2\}+E\{(E\{X|\pmb Y\}-A(\pmb Y))^2\}+2E\{(E\{X|\pmb Y\}-A(\pmb Y))(E\{X|\pmb Y\}-E\{X|\pmb Y\})\} \\ \mathrm{(vii)}&=E\{(X-E\{X|\pmb Y\})^2\}+E\{(E\{X|\pmb Y\}-A(\pmb Y))^2\} \\ \mathrm{(viii)}&\ge E\{(X-E\{X|\pmb Y\})^2\} \\ \mathrm{(ix)}&\Longrightarrow \text{optimal predictor } \hat{X}=A(\pmb Y)=E\{X|\pmb Y\} \end{aligned} \]

The following steps are used in the outlined proof:
(i) Express predictor as general function \(\hat{X} = A(Y)\)
(ii) Expression is not modified by adding \(E\{X|Y\} − E\{X|Y\}\)
(iii) Expand the square and use the linearity of the expectation
(iv) Use iterative expectation rule
(v) \(E\{X|Y\}\) and \(A(Y)\) are deterministic functions of \(Y\)
(vi) \(E\{X|Y\}\) is a deterministic function of \(Y\)
(vii) \(E\{X|Y\} − E\{X|Y\}\) is equal to zero
(viii) The first term does not depend on the chosen predictor; the second term is always greater than or equal to zero
(ix) The minimum MSE is obtained if the second term in (vii) is equal to zero, i.e., by the choice \(A(Y) = E\{X|Y\}\)
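The decomposition in steps (i) to (vii) can also be checked numerically. The sketch below reuses the assumed toy model from part (a): the conditional mean \(Y^2\) is compared against a competing linear predictor \(A(\pmb Y)\), and the competitor's MSE exceeds the optimum by (up to sampling noise) exactly the second term from step (vii):

```python
# Numerical check of the MSE decomposition from step (vii), under the
# assumed toy model Y ~ N(1, 1), X = Y^2 + W, W ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

y = rng.normal(1.0, 1.0, size=n)
x = y**2 + rng.standard_normal(n)            # E{X|Y} = Y^2, noise variance 1

cond_mean = y**2                             # the claimed optimal predictor
slope, intercept = np.polyfit(y, x, 1)       # a competing linear predictor A(Y)
linear = slope * y + intercept

mse_opt = np.mean((x - cond_mean)**2)        # ~ E{(X - E{X|Y})^2} = 1
mse_lin = np.mean((x - linear)**2)           # MSE of the linear competitor
gap = np.mean((cond_mean - linear)**2)       # second term from step (vii)

print(mse_opt)             # ~ 1.0
print(mse_lin)             # strictly larger
print(mse_opt + gap)       # ~ mse_lin, matching the decomposition
```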

2. Minimizations for Random Variables

Here, \(y\) is a scalar constant rather than a function of an observation: \(E\{(X-y)^2\}\) is minimized by the choice \(y=E\{X\}\).
Extending the theorem:
Consequently, given an event \(A\), find the value of \(y\) attaining

\[\min \limits_{y}E\{(X-y)^2|X\in A\} \]

Solution:\(y=E\{X|X\in A\}\)
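A minimal sketch of this conditional version, assuming as an example \(X \sim \mathcal N(0,1)\) and the event \(A = \{X > 0\}\), for which \(E\{X|X>0\} = \sqrt{2/\pi} \approx 0.7979\):

```python
# Monte Carlo check that y = E{X | X in A} minimizes E{(X - y)^2 | X in A}.
# Assumed example: X ~ N(0, 1) and A = {X > 0}.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)           # samples of X
x_in_a = x[x > 0]                            # keep only samples with X in A

ys = np.linspace(0.0, 1.6, 161)              # candidate constants y
mses = np.array([np.mean((x_in_a - y)**2) for y in ys])

print("argmin over y ~", ys[mses.argmin()])  # ~ 0.80
print("E{X | X > 0}  ~", x_in_a.mean())      # ~ sqrt(2/pi) = 0.7979
```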

Reference: Source Coding

Source: https://www.cnblogs.com/a-runner/p/15704472.html