
Solving a PDE-Constrained Optimization Problem with ADMM


Problem Statement

Consider the optimization problem:
$$\begin{array}{cl} \min_{y,u} & J(y,u)=\dfrac{1}{2}\displaystyle\int_{\Omega}\left((y-y_{\Omega})^2+u^2\right)dx\,dy+\int_{\partial\Omega}e_{\Gamma}\,y\,ds \\ \text{s.t.} & -\Delta y+y=u+e_{\Omega},\quad \mathbf{x}\in\Omega, \\ & \dfrac{\partial y}{\partial n}=0,\quad \mathbf{x}\in\partial\Omega, \\ & 0\le u\le 1. \end{array}$$
The exact solution is constructed as follows. With
$$r^2=(x_1-0.5)^2+(x_2-0.5)^2,\qquad \Omega=[0,1]^2,$$
the problem data are
$$e_{\Omega}(x_1,x_2)=1-\min\{1,\max\{0,\,12r^2-1/3\}\},\qquad e_{\Gamma}(x_1,x_2)=-12,$$
$$y_{\Omega}=12r^2-\frac{142}{3},$$
and the exact solution is
$$y^{*}=1,\qquad u^{*}=\min\{1,\max\{0,\,12r^2-1/3\}\}.$$
Note that $u^{*}+e_{\Omega}\equiv 1$, so $y^{*}=1$ indeed satisfies the state equation and the homogeneous Neumann boundary condition.
Partition the domain uniformly into an $M\times N$ array of grid points; below, a classical (discretize-then-optimize) approach is used to solve for the values of $y,u$ at these grid points:
$$\begin{array}{cl} \min_{y,u} & J(y,u)=(y-y_{\Omega})^TB(y-y_{\Omega})+u^TBu+c^Ty \\ \text{s.t.} & Ay-e_{\Omega}=u, \\ & 0\le u\le 1. \end{array}$$
Here $A$ comes from the five-point difference scheme. The matrix $B$ needs particular care: it is not simply $0.5I$.
[Figure: layout of the grid points on $\Omega$]
The grid points are chosen as in the figure above. My first attempt at the discrete objective was
$$J(u,y)=\frac{1}{2}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left((y_{i,j}-y_{\Omega_{i,j}})^2+u_{i,j}^2\right)+C^Ty,$$
where $C$ is supported on the boundary points, with entries of size $12/dx$ or $12/dy$ coming from $e_{\Gamma}=-12$. However, the double sum is not a proper quadrature of the integral over the whole domain: as the figure shows, it over-counts part of the integral near the boundary, so the excess has to be subtracted off. The exact treatment can be found in my code, and the resulting weights are summarized below.
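Concretely, mat() in the code below assembles $B$ as the diagonal matrix of trapezoidal-type quadrature weights for $\frac{1}{2}\int_{\Omega}$ (up to the common area factor $s=dx\,dy$, which rescales the objective by a constant and so does not change the minimizer):
$$B_{ii}=\begin{cases}1/2, & \text{interior points},\\ 1/4, & \text{edge points},\\ 1/8, & \text{corner points}.\end{cases}$$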
To make the computation cleaner, I also modified the constraint by multiplying both sides by $s=dx\,dy$, which finally gives:
$$\begin{array}{cl} \min_{y,u} & J(y,u)=(y-y_{\Omega})^TB(y-y_{\Omega})+u^TBu+c^Ty \\ \text{s.t.} & Ay-e_{\Omega}s=us, \\ & 0\le u\le 1. \end{array}$$
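For reference, reading an interior row off matrix() in the code below, the scaled constraint at an interior node $(i,j)$ with mesh sizes $h_x,h_y$ is
$$\Big(2\big(\tfrac{h_y}{h_x}+\tfrac{h_x}{h_y}\big)+h_xh_y\Big)y_{i,j}-\tfrac{h_y}{h_x}\big(y_{i-1,j}+y_{i+1,j}\big)-\tfrac{h_x}{h_y}\big(y_{i,j-1}+y_{i,j+1}\big)=h_xh_y\big(u_{i,j}+e_{\Omega,i,j}\big);$$
dividing through by $s=h_xh_y$ recovers the usual five-point discretization of $-\Delta y+y=u+e_{\Omega}$, while the boundary rows carry doubled off-diagonal entries that encode the Neumann condition.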
With this formulation, the problem becomes very easy to solve with ADMM; the update scheme is written out after the figure.
[Figure: ADMM iteration scheme]
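Writing the scheme out explicitly (this is what y_iter, u_iter, and the multiplier step in the code below implement), introduce a multiplier $\alpha$ and penalty parameter $1/\tau$, and form the augmented Lagrangian
$$L_{\tau}(y,u,\alpha)=(y-y_{\Omega})^TB(y-y_{\Omega})+u^TBu+c^Ty+\alpha^T(Ay-us-e_{\Omega}s)+\frac{1}{2\tau}\left\|Ay-us-e_{\Omega}s\right\|^2.$$
One ADMM sweep then reads
$$\Big(B+\tfrac{1}{2\tau}A^TA\Big)y^{k+1}=By_{\Omega}-\tfrac{1}{2}c+\tfrac{1}{2\tau}A^T\big(e_{\Omega}s+u^ks-\tau\alpha^k\big),$$
$$\Big(B+\tfrac{s^2}{2\tau}I\Big)\tilde u=\tfrac{1}{2}\Big(s\alpha^k+\tfrac{s}{\tau}\big(Ay^{k+1}-e_{\Omega}s\big)\Big),\qquad u^{k+1}=\min\{1,\max\{0,\tilde u\}\},$$
$$\alpha^{k+1}=\alpha^k+\tfrac{1}{\tau}\big(Ay^{k+1}-u^{k+1}s-e_{\Omega}s\big).$$
Since $B$ is diagonal, the $u$-subproblem is separable, so the clipping step is the exact projection onto $[0,1]$. Both system matrices are fixed across iterations, which is why their Cholesky factors are computed once and reused.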
Below are the ADMM solver and its results:

import matplotlib.pyplot as plt
import numpy as np
import time
from matplotlib import cm
def Cholesky(matrix):
    # Cholesky factorization: returns the lower-triangular G with G @ G.T == matrix
    w = matrix.shape[0]
    G = np.zeros((w,w))  # half the storage would actually suffice for the factorization
    for i in range(w):
        G[i,i] = (matrix[i,i] - np.dot(G[i,:i],G[i,:i].T))**0.5
        for j in range(i+1,w):
            G[j,i] = (matrix[j,i] - np.dot(G[j,:i],G[i,:i].T))/G[i,i]
    return G
def L_Gauss(A,b):
    # forward substitution: solve the lower-triangular system A x = b
    x = b.copy()
    x[0,0] = b[0,0]/A[0,0]
    for i in range(1,x.shape[0]):
        x[i,0] = (b[i,0] - (A[i,:i]@x[:i,0]).sum())/A[i,i]
    return x
def U_Gauss(A,b):
    # back substitution: solve the upper-triangular system A x = b
    x = b.copy()
    n = x.shape[0]
    x[n - 1,0] = b[n - 1,0]/A[n - 1,n - 1]
    for i in range(n - 2,-1,-1):
        x[i,0] = (b[i,0] - (A[i,i + 1:]*x[i + 1:,0]).sum())/A[i,i]
    return x
def error(u_pred, u_acc):
    # root-mean-square error between numerical and exact values
    return (((u_pred - u_acc)**2).mean()) ** 0.5
class FENET():
    # uniform-grid finite-difference discretization: mesh, exact data, and the matrices A and B
    def __init__(self,bounds,nx):
        self.dim = 2
        self.bounds = bounds
        self.hx = [(bounds[0,1] - bounds[0,0])/nx[0],
                  (bounds[1,1] - bounds[1,0])/nx[1]]
        self.nx = [nx[0] + 1,nx[1] + 1]
        #-----------
        self.size = self.nx[0]*self.nx[1]
        self.X = np.zeros([self.size,2])
        for i in range(self.nx[0]):
            for j in range(self.nx[1]):
                self.X[i*self.nx[1] + j,0] = bounds[0,0] + i*self.hx[0]
                self.X[i*self.nx[1] + j,1] = bounds[1,0] + j*self.hx[1]
        self.y_Omega = np.zeros([self.size,1])  # target state y_Omega at the grid points
        self.yy = np.ones([self.size,1])        # exact state y* = 1
        self.pp = np.zeros([self.size,1])       # 1/3 - 12 r^2, so that u* = clip(-pp, 0, 1)
        self.uu = np.zeros([self.size,1])       # exact control u*
        for i in range(self.size):
            self.pp[i,0] = -12*((self.X[i,0] - 0.5)**2 + (self.X[i,1] - 0.5)**2) + 1/3
            self.y_Omega[i,0] = 12*((self.X[i,0] - 0.5)**2 + (self.X[i,1] - 0.5)**2) - 142/3
            self.uu[i,0] = min(1,max(0,-self.pp[i,0]))
    def matrix(self):
        # five-point difference matrix A of -Laplace(y) + y, scaled by s = dx*dy;
        # doubled off-diagonal entries in boundary rows encode the Neumann condition
        A = np.zeros([self.nx[0]*self.nx[1],self.nx[0]*self.nx[1]])
        dx = self.hx[0];dy = self.hx[1]
        for i in range(self.nx[0]):
            for j in range(self.nx[1]):
                A[i*self.nx[1]+j,i*self.nx[1]+j] = 2*(dx/dy + dy/dx) + dx*dy
                if i == 0:
                    A[i*self.nx[1] + j,(i + 1)*self.nx[1] + j] = -2*dy/dx
                    if j == 0:
                        A[i*self.nx[1] + j,i*self.nx[1] + j + 1] = -2*dx/dy
                    elif j == self.nx[1] - 1:
                        A[i*self.nx[1] + j,i*self.nx[1] + j - 1] = -2*dx/dy
                    else:
                        A[i*self.nx[1] + j,i*self.nx[1] + j + 1] = -dx/dy
                        A[i*self.nx[1] + j,i*self.nx[1] + j - 1] = -dx/dy
                elif i == self.nx[0] - 1:
                    A[i*self.nx[1] + j,(i - 1)*self.nx[1] + j] = -2*dy/dx
                    if j == 0:
                        A[i*self.nx[1] + j,i*self.nx[1] + j + 1] = -2*dx/dy
                    elif j == self.nx[1] - 1:
                        A[i*self.nx[1] + j,i*self.nx[1] + j - 1] = -2*dx/dy
                    else:
                        A[i*self.nx[1] + j,i*self.nx[1] + j + 1] = -dx/dy
                        A[i*self.nx[1] + j,i*self.nx[1] + j - 1] = -dx/dy
                else:
                    A[i*self.nx[1] + j,(i - 1)*self.nx[1] + j] = -dy/dx
                    A[i*self.nx[1] + j,(i + 1)*self.nx[1] + j] = -dy/dx
                    if j == 0:
                        A[i*self.nx[1] + j,i*self.nx[1] + j + 1] = -2*dx/dy
                    elif j == self.nx[1] - 1:
                        A[i*self.nx[1] + j,i*self.nx[1] + j - 1] = -2*dx/dy
                    else:
                        A[i*self.nx[1] + j,i*self.nx[1] + j + 1] = -dx/dy
                        A[i*self.nx[1] + j,i*self.nx[1] + j - 1] = -dx/dy
        return A
    def mat(self):
        # diagonal quadrature-weight matrix B: 1/2 interior, 1/4 edges, 1/8 corners
        B = 0.5*np.eye(self.nx[0]*self.nx[1],self.nx[0]*self.nx[1])
        for i in range(1,self.nx[0] - 1):
            B[i*self.nx[1],i*self.nx[1]] = 0.5 - 0.25
            B[i*self.nx[1] + self.nx[1] - 1,i*self.nx[1] + self.nx[1] - 1] = 0.5 - 0.25
        for j in range(1,self.nx[1] - 1):
            B[j,j] = 0.5 - 0.25
            B[(self.nx[0] - 1)*self.nx[1] + j,(self.nx[0] - 1)*self.nx[1] + j] = 0.5 - 0.25
        B[0,0] = 0.5 - 3/8
        B[self.nx[1] - 1,self.nx[1] - 1] = 0.5 - 3/8
        B[(self.nx[0] - 1)*self.nx[1],(self.nx[0] - 1)*self.nx[1]] = 0.5 - 3/8
        B[-1,-1] = 0.5 - 3/8
        return B
    def E_Omega(self):
        r = (self.X[:,0] - 0.5)**2 + (self.X[:,1] - 0.5)**2  
        tmp = np.maximum(12*r - 1/3,0)
        out = 1 - np.minimum(tmp,1)
        return out.reshape(-1,1)
    
    def right(self):
        # boundary source vector c from e_Gamma = -12: entries of -12/dx or -12/dy at boundary points
        c1 = np.zeros([self.size,1])
        c2 = np.zeros([self.size,1])
        for i in range(self.nx[0]):
            c1[i*self.nx[1],0] = -12*(self.nx[1] - 1)
            c1[i*self.nx[1] + self.nx[1] - 1,0] = -12*(self.nx[1] - 1)
        for j in range(1,self.nx[1] - 1):
            c2[j,0] = -12*(self.nx[0] - 1)
            c2[(self.nx[0] - 1)*self.nx[1] + j,0] = -12*(self.nx[0] - 1)
        return c1 + c2


class OPTS():
    def __init__(self,fenet):
        self.epoch = 300  # maximum number of iterations
        self.tau = 1e-4   # reciprocal of the ADMM penalty parameter
        self.eps = 1e-15  # stopping tolerance
        self.yy = fenet.yy
        self.uu = fenet.uu
        self.s = fenet.hx[0]*fenet.hx[1]
        self.A = fenet.matrix()
        self.m = self.A.shape[0]
        self.n = self.A.shape[1]
        self.B = fenet.mat()
        #self.B = 0.5*np.eye(self.m,self.n)
        self.e_Omega = fenet.E_Omega()
        self.c = fenet.right()
        self.y_Omega = fenet.y_Omega
        self.Ly = Cholesky(self.B + 0.5*self.A.T@self.A/self.tau)  # cached Cholesky factor for the y-subproblem
        self.Lu = Cholesky(self.B + np.identity(self.m)*0.5*self.s**2/self.tau)  # cached Cholesky factor for the u-subproblem
        self.y0 = np.random.randn(self.n,1)  # initial iterate
        #self.y0 = np.ones([self.n,1])
        self.time = 0       # elapsed running time
        self.value = 0      # final objective value
        self.y_error = 0    # final error in y
        self.u_error = 0    # final error in u
        self.epoch_end = 0  # number of iterations actually performed
def value(y,y_Omega,c,u,B):
    # objective value; note 1/y.shape[0] = 1/(nx+1)^2 only approximates the cell area s = dx*dy
    area = 1/y.shape[0]
    tmp = (y - y_Omega).T@B@(y - y_Omega) + u.T@B@u 
    return (tmp[0,0] + (c*y).sum())*area
def y_iter(tau,A,B,Ly,y_Omega,u,c,alpha,e_Omega,s):
    # y-subproblem: solve (B + A^T A/(2 tau)) y = B y_Omega - c/2 + A^T(e_Omega s + u s - tau alpha)/(2 tau)
    b = B@y_Omega - 0.5*c + 0.5*A.T@(e_Omega*s + u*s - alpha*tau)/tau
    x1 = L_Gauss(Ly,b)
    return U_Gauss(Ly.T,x1)

def u_iter(tau,alpha,A,B,Lu,y,e_Omega,s):
    # u-subproblem first-order condition: (2B + (s^2/tau) I) u = s*alpha + (s/tau)(A y - e_Omega s)
    b = s*alpha + (A@y - e_Omega*s)*s/tau
    x1 = L_Gauss(Lu,0.5*b)
    z = U_Gauss(Lu.T,x1)
    # B is diagonal, so the subproblem is separable and clipping is the exact projection onto [0,1]
    z[z > 1] = 1
    z[z < 0] = 0
    return z
def ADMM(opts):
    tic = time.time()
    y_old = opts.y0.copy()
    u_old = opts.y0.copy()
    alpha_old = opts.y0.copy()
    for i in range(opts.epoch):
        y_new = y_iter(opts.tau,opts.A,opts.B,opts.Ly,opts.y_Omega,u_old,opts.c,alpha_old,opts.e_Omega,opts.s)
        u_new = u_iter(opts.tau,alpha_old,opts.A,opts.B,opts.Lu,y_new,opts.e_Omega,opts.s)
        alpha_new = alpha_old + (opts.A@y_new - u_new*opts.s - opts.e_Omega*opts.s)/opts.tau
        rho = np.linalg.norm(y_new - y_old) + np.linalg.norm(u_new - u_old)
        if (i + 1)%50 == 0:
            #err = opts.A@y_new - u_new*opts.s - opts.e_Omega*opts.s
            #print(np.linalg.norm(err))
            val = value(y_new,opts.y_Omega,opts.c,u_new,opts.B)
            print('epoch:%d,norm:%.3e,y_error:%.3e,u_error:%.3e,val:%.4e'
                  %(i + 1,rho,error(y_new,opts.yy),error(u_new,opts.uu),val))
        if rho < opts.eps:
            break
        else:
            y_old = y_new
            u_old = u_new
            alpha_old = alpha_new
        
    val = value(y_new,opts.y_Omega,opts.c,u_new,opts.B)
    ela = time.time() - tic
    opts.value = val
    opts.time = ela
    opts.y_error = error(y_new,opts.yy)
    opts.u_error = error(u_new,opts.uu)
    opts.epoch_end = i + 1
    return y_new,u_new,opts
bounds = np.array([[0,1],[0,1.0]])
nx_te = [20,20]  # mesh resolution: number of cells in each direction
fenet = FENET(bounds,nx_te)
opts = OPTS(fenet)

print(value(opts.yy,opts.y_Omega,opts.c,opts.uu,opts.B))
err = fenet.matrix()@opts.yy - opts.uu*opts.s - opts.e_Omega*opts.s
print(np.linalg.norm(err))
y_new,u_new,out = ADMM(opts)
print(np.linalg.norm(fenet.matrix()@y_new - u_new*opts.s - opts.e_Omega*opts.s))
#print(y_new)
print('the epoch:%d,y_error:%.3e,u_error:%.3e,the value:%.4e,the time:%.2f'\
      %(out.epoch_end,out.y_error,out.u_error,out.value,out.time))
fig, ax = plt.subplots(2,3,figsize=(12,7))
ax = ax.reshape((2,3))
for i in range(2):
    for j in range(3):
        ax[i,j].axis('equal')
        ax[i,j].set_xlim([0,1])
        ax[i,j].set_ylim([0,1])
        ax[i,j].axis('off')
        
norm1 = cm.colors.Normalize(vmin=0, vmax=1)
norm2 = cm.colors.Normalize(vmin=0.9, vmax=1.1)

num_line = 10
x = np.linspace(bounds[0,0],bounds[0,1],nx_te[0] + 1)
y = np.linspace(bounds[1,0],bounds[1,1],nx_te[1] + 1)
X,Y = np.meshgrid(x,y)

u = u_new.reshape(nx_te[0] + 1,nx_te[1] + 1)
y = y_new.reshape(nx_te[0] + 1,nx_te[1] + 1)
u_e = opts.uu.reshape(nx_te[0] + 1,nx_te[1] + 1)
y_e = opts.yy.reshape(nx_te[0] + 1,nx_te[1] + 1)

ax00 = ax[0,0].contourf(X, Y, u, num_line, alpha=1, cmap='rainbow')
ax[0,0].contour(ax00, linewidths=0.6, colors='black')
ax01 = ax[0,1].contourf(X, Y, u_e, num_line, alpha=1, cmap='rainbow')
ax[0,1].contour(ax01, linewidths=0.6, colors='black')

ax02 = ax[0,2].contourf(X, Y, u-u_e, num_line, alpha=1, cmap='rainbow')
ax[0,2].contour(ax02, linewidths=0.6, colors='black')
fig.colorbar(ax00,ax=ax[0,0])
fig.colorbar(ax01,ax=ax[0,1])
fig.colorbar(ax02,ax=ax[0,2])
ax[0,0].set_title('numerical: u')
ax[0,1].set_title('exact: u')
ax[0,2].set_title('difference: u')

ax10 = ax[1,0].contourf(X, Y, y, num_line, alpha=1, cmap='rainbow')
ax[1,0].contour(ax10, linewidths=0.6, colors='black')
ax11 = ax[1,1].contourf(X, Y, y_e, num_line, alpha=1, cmap='rainbow')
ax[1,1].contour(ax11, linewidths=0.6, colors='black')
ax12 = ax[1,2].contourf(X, Y, y-y_e, num_line, alpha=1, cmap='rainbow')
ax[1,2].contour(ax12, linewidths=0.6, colors='black')
fig.colorbar(ax10,ax=ax[1,0])
fig.colorbar(ax11,ax=ax[1,1])
fig.colorbar(ax12,ax=ax[1,2])
ax[1,0].set_title('numerical: y')
ax[1,1].set_title('exact: y')
ax[1,2].set_title('difference: y')
fig.tight_layout()  # keep the subplots from overlapping


plt.show()
fig.savefig('diff200-6.6.png',dpi=300)

[Figures: contours of the numerical solution, exact solution, and their difference, for u (top row) and y (bottom row)]

Galerkin

The problem can also be handled with a neural network. Here I will just describe the Galerkin loss function; I do not plan to release that code.
$$\begin{aligned} L_{obj} &=\frac{1}{N}\sum_{i=1}^{N} J\left(\hat{u}_{i},\hat{y}_{i}\right) \\ L_{pde} &=\frac{1}{N}\sum_{k=1}^{K}\sum_{i=1}^{N}\left(\nabla\hat{y}_{i}\cdot\nabla v_{i}^{k}-\left(\hat{u}_{i}+e_{\Omega,i}-\hat{y}_{i}\right)v_{i}^{k}\right)^{2}+\frac{1}{M}\sum_{j=1}^{M}\left(\partial_{\nu}\hat{y}_{j}\right)^{2} \\ L_{iu} &=\frac{1}{N}\sum_{i=1}^{N}\left(\operatorname{ReLU}\left(-\hat{u}_{i}\right)^{2}+\operatorname{ReLU}\left(\hat{u}_{i}-1\right)^{2}\right) \\ L &=L_{obj}+\alpha L_{pde}+\beta L_{iu} \end{aligned}$$
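To make the constraint term concrete, here is a minimal PyTorch sketch of $L_{iu}$ (an illustration of mine, not the withheld code; box_penalty and u_hat are hypothetical names, with u_hat standing for the network's control prediction at the $N$ collocation points):

import torch

def box_penalty(u_hat):
    # L_iu = mean( ReLU(-u)^2 + ReLU(u - 1)^2 ): zero exactly when 0 <= u_hat <= 1 elementwise,
    # and penalizes violations of the box constraint piecewise-quadratically (hence differentiably)
    return (torch.relu(-u_hat)**2 + torch.relu(u_hat - 1.0)**2).mean()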
Below is how the Galerkin approach performs; I leave the implementation to the reader.

[Figure: Galerkin results]

Source: https://blog.csdn.net/forrestguang/article/details/119335378