
Parallel Loops in Fortran with the MPI Library


Parallel Loops in Fortran

The machine runs Ubuntu 18.04 LTS, the Fortran compiler is Intel oneAPI, and the parallelization uses Fortran 90 plus the MPI library on 3 CPU cores.
The following three hello-world programs illustrate three different loop strategies:

  1. Every CPU executes all three loop iterations.
  2. The three CPUs share the three iterations, i.e., each CPU executes exactly one.
  3. A subroutine plus global variables handles more complex repeated loops.

The shell commands to compile, run, and debug are:
#compile ("-g -DMPI_DEBUG" is only needed when debugging with GDB)
mpiifort -g -DMPI_DEBUG hello_world.f -o z.out
#run on 3 cores
mpirun -np 3 ./z.out
#debug (requires xterm to be installed)
mpirun -np 3 xterm -e gdb ./z.out
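The debug line launches one xterm window per rank, each running its own GDB session on a separate copy of z.out, so the three processes can be stepped through independently.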

Each CPU executes all three loop iterations

c test parallel program 
      program main
      use mpi
      integer::ICORE,NCORE,IERR,MASTER
c start parallel-computation and assign the master core
      CALL MPI_INIT( IERR )
      CALL MPI_COMM_RANK(MPI_COMM_WORLD,ICORE,IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD,NCORE,IERR)
      CALL MPI_BARRIER(MPI_COMM_WORLD,IERR)
c loop begin
      do 1 i  = 1,3
      write(*,'(1x,i2,a,i2,a5,i1)') ICORE,'/',NCORE,'LOOP=',i
1     continue
      CALL MPI_BARRIER(MPI_COMM_WORLD,IERR)
c exit CPUs
      call MPI_FINALIZE ( IERR )
      end program
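With 3 ranks this prints nine lines in total, each rank reporting LOOP=1 through 3. The interleaving across ranks is nondeterministic: the barriers only separate program phases, they do not order output within a phase.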

The 3 CPUs share the 3 iterations (one iteration per CPU)

c test parallel program 
      program main
      use mpi
      integer::ICORE,NCORE,IERR,MASTER
c start parallel-computation and assign the master core
      CALL MPI_INIT( IERR )
      CALL MPI_COMM_RANK(MPI_COMM_WORLD,ICORE,IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD,NCORE,IERR)
      CALL MPI_BARRIER(MPI_COMM_WORLD,IERR)
c loop begin
      do 1 i  = 1,3
      icyes   = MOD(i,3) 
      if(icyes.eq.icore) then 
      write(*,'(1x,i2,a,i2,a5,i1)') ICORE,'/',NCORE,'LOOP=',i
      endif
1     continue
      CALL MPI_BARRIER(MPI_COMM_WORLD,IERR)
c exit CPUs
      call MPI_FINALIZE ( IERR )
      end program
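Here MOD(i,3) maps iterations 1, 2, 3 to ranks 1, 2, 0, so each rank executes exactly one pass. When there are more iterations than ranks, the same trick yields a cyclic decomposition. The following sketch is not from the original post (the program name, the 12-iteration count, and the psum/total variables are illustrative); it spreads the iterations over however many ranks were launched and combines the per-rank partial sums with the standard MPI_REDUCE call:

c cyclic loop decomposition plus reduction (illustrative sketch)
      program cyclic_sum
      use mpi
      integer::ICORE,NCORE,IERR,i,n
      real*8::psum,total
      CALL MPI_INIT( IERR )
      CALL MPI_COMM_RANK(MPI_COMM_WORLD,ICORE,IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD,NCORE,IERR)
      n    = 12
      psum = 0.0d0
c rank ICORE takes every iteration i with MOD(i-1,NCORE) == ICORE
      do 1 i = 1,n
      if(MOD(i-1,NCORE).eq.ICORE) psum = psum + dble(i)
1     continue
c combine the partial sums on rank 0 (total should be 78)
      CALL MPI_REDUCE(psum,total,1,MPI_DOUBLE_PRECISION,MPI_SUM,
     &                0,MPI_COMM_WORLD,IERR)
      if(ICORE.eq.0) print*,'total =',total
      call MPI_FINALIZE( IERR )
      end program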

Using a subroutine + global variables for more complex repeated loops

c test parallel program 
      program main
      use mpi
      !INCLUDE "mpif.h"   ! legacy alternative to "use mpi"
      COMMON/PARALLEL/ICORE,NCORE,IERR
c start parallel-computation and assign the master core
      CALL MPI_INIT( IERR )
      CALL MPI_COMM_RANK(MPI_COMM_WORLD,ICORE,IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD,NCORE,IERR)
      CALL MPI_BARRIER(MPI_COMM_WORLD,IERR)

c step-01 loop output core number
      print*,"-------This is the first assignment-------"
      do 1 i  = 1,3
      icyes   = MOD(i,3)
      if(icyes.eq.icore) then 
      write(*,'(1x,i2,a,i2,a5,i1)') ICORE,'/',NCORE,'LOOP=',i
      endif 
1     continue
      CALL MPI_BARRIER(MPI_COMM_WORLD,IERR)

c step-02 loop over a subroutine
      print*,"-------This is the second assignment-------"
      do 2 i=1,3
      CALL hello_world(i)
2     continue
      CALL MPI_BARRIER(MPI_COMM_WORLD,IERR)

c exit CPUs
      PRINT*,"CPU:",ICORE," of ",NCORE," ENDS"
      call MPI_FINALIZE ( IERR )
      end program

c output hello world
      subroutine hello_world(i)
c the named COMMON must match the main program's declaration exactly
      COMMON/PARALLEL/ICORE,NCORE,IERR
      icyes   = MOD(i,ncore)
      if(icyes.eq.icore) then 
      write(*,100) ICORE,NCORE,i
      endif
100   format('Hello World',1X,I4,1X,I4,1X,I4)
      return 
      end
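Because the post targets Fortran 90, the COMMON block can also be replaced by a module, which removes the duplicated (and easy-to-mismatch) declarations. A minimal illustrative sketch, with the module name parallel_vars chosen here rather than taken from the original code:

c module-based sharing of the MPI bookkeeping variables (sketch)
      module parallel_vars
      integer::ICORE,NCORE,IERR
      end module

      subroutine hello_world(i)
      use parallel_vars
      integer,intent(in)::i
      if(MOD(i,NCORE).eq.ICORE) then
      write(*,100) ICORE,NCORE,i
      endif
100   format('Hello World',1X,I4,1X,I4,1X,I4)
      return
      end

The main program would then say "use parallel_vars" instead of declaring COMMON/PARALLEL/.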

Source: https://www.cnblogs.com/liangxuran/p/16275465.html