
Monte Carlo Integration



Monte Carlo integration approaches the problem of integration from a different perspective than quadrature integration. Quadrature integration goes from the discrete to the continuous, relying mainly on the concepts of limits, convergence, and continuity. Monte Carlo integration instead builds on random sampling and the mathematical expectation of random variables.

Probability Background

Cumulative Distributions and Density Functions

The cumulative distribution function, or CDF, of a random variable \(X\) is the probability that a value chosen from the variable’s distribution is less than or equal to some threshold \(x\):

\[cdf(x) = Pr\{X \leq x \} \]

The corresponding probability density function, or PDF, is the derivative of the CDF:

\[pdf\left( x \right) \ =\ \frac{d}{dx}cdf\left( x \right) \]

and we can calculate the probability within an interval:

\[Pr\left\{ a\le X\le b \right\} \ =\ \int_a^b{pdf\left( x \right) dx} \]
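To make the relation between the PDF and the CDF concrete, here is a minimal Python sketch. The exponential distribution with rate \(\lambda = 1.5\) is an arbitrary choice for illustration; the sketch checks that integrating the PDF over \([a, b]\) agrees with the difference of the CDF:

```python
import math

LAM = 1.5  # rate of the exponential distribution (arbitrary illustrative choice)

def pdf(x):
    # pdf(x) = lam * exp(-lam * x) for x >= 0
    return LAM * math.exp(-LAM * x)

def cdf(x):
    # cdf(x) = 1 - exp(-lam * x) for x >= 0
    return 1.0 - math.exp(-LAM * x)

def prob_interval(a, b, steps=100_000):
    # Midpoint Riemann sum approximating  Pr{a <= X <= b} = \int_a^b pdf(x) dx
    dx = (b - a) / steps
    return sum(pdf(a + (i + 0.5) * dx) for i in range(steps)) * dx

a, b = 0.2, 1.0
print(prob_interval(a, b))  # integral of the PDF over [a, b]
print(cdf(b) - cdf(a))      # difference of the CDF; the two values should agree
```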

Expected Values and Variance

The expected value or expectation of a random variable \(Y = f(x)\) over a domain \(\mu(x)\) is defined as:

\[E\left[ Y \right] \ =\ \int_{\mu \left( x \right)}{f\left( x \right) \cdot pdf\left( x \right) d\mu \left( x \right)} \]

while its variance is:

\[\sigma ^2\left[ Y \right] \ =\ E\left[ \left( Y-E\left[ Y \right] \right) ^2 \right] \]
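As a small illustrative sketch (Python), take the arbitrary choice \(Y = X^2\) with \(X\) uniform on \([0, 1)\), so that \(E[Y] = 1/3\) and \(\sigma^2[Y] = 4/45\) are known exactly; both quantities can then be estimated from samples:

```python
import random

def f(x):
    return x * x  # Y = f(X) = X^2, an illustrative choice

N = 100_000
samples = [f(random.random()) for _ in range(N)]  # X ~ Uniform[0, 1), pdf = 1

mean = sum(samples) / N                                # estimate of E[Y]
var = sum((y - mean) ** 2 for y in samples) / (N - 1)  # estimate of sigma^2[Y]

print(mean, "vs exact", 1 / 3)
print(var, "vs exact", 4 / 45)
```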

This is the basic probability background we need; any further concepts will be explained when we use them.

The Monte Carlo Estimator

The Basic Estimator

Monte Carlo integration uses random sampling of a function to numerically compute an estimate of its integral. Suppose that we want to integrate the one-dimensional function \(f (x)\) from \(a\) to \(b\):

\[F\ =\ \int_a^b{f\left( x \right) dx} \]

We can approximate this integral by averaging samples of the function \(f\) at uniform random points within the interval. Given a set of \(N\) uniform random variables \(X_i\in \left[ a,b \right)\) with a corresponding PDF of \(1/(b-a)\), the Monte Carlo estimator for computing \(F\) is:

\[\left\langle F^{N}\right\rangle=(b-a) \frac{1}{N} \sum_{i=0}^{N-1} f\left(X_{i}\right) \tag{1} \]

I think a direct example, with \(N=4\) on the interval \([a,b)\), using a uniform probability density and random sampling, explains this formula very well.

[Figure: four uniformly distributed random samples of \(f\) on the interval \([a,b)\)]

We use random sampling to draw the points and evaluate the function at them, obtaining \(f(X_0)\), \(f(X_1)\), \(f(X_2)\), \(f(X_3)\) as shown above.

The following picture then shows how these samples simulate the integration.

[Figure: simulating the integration with the four sampled values]

This integration process is a combination of sampling and the statistical computation of a numerical characteristic, namely the expected value. That is exactly what formula (1) does with \(N = 4\); extending 4 to \(N\) gives the general result of formula (1).
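A minimal Python sketch of estimator (1); the test function \(f(x) = \sin x\) on \([0, \pi]\), whose exact integral is 2, is an arbitrary choice for illustration:

```python
import math
import random

def mc_integrate(f, a, b, n):
    # <F^N> = (b - a) * (1/N) * sum_i f(X_i),  with X_i ~ Uniform[a, b)
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

estimate = mc_integrate(math.sin, 0.0, math.pi, 100_000)
print(estimate, "vs exact", 2.0)
```

Increasing \(N\) makes the estimate cluster more tightly around the exact value, which is the convergence behavior discussed next.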

Expected Value and Convergence

How do we prove this?
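A sketch of the standard argument: assuming the samples \(X_i\) are independent and uniformly distributed on \([a,b)\) with PDF \(1/(b-a)\), linearity of expectation gives

\[
\begin{aligned}
E\left[ \left\langle F^N \right\rangle \right]
&= E\left[ (b-a)\,\frac{1}{N}\sum_{i=0}^{N-1} f\left( X_i \right) \right] \\
&= (b-a)\,\frac{1}{N}\sum_{i=0}^{N-1} E\left[ f\left( X_i \right) \right] \\
&= (b-a)\,\frac{1}{N}\sum_{i=0}^{N-1} \int_a^b{f\left( x \right) \frac{1}{b-a}\,dx} \\
&= \int_a^b{f\left( x \right) dx} \ =\ F
\end{aligned}
\]

so \(\left\langle F^N \right\rangle\) is an unbiased estimator of \(F\).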

Source: https://www.cnblogs.com/wevolf/p/13110971.html