c – OpenMPI MPMD: getting the communicator size
I have two OpenMPI programs, which I start like this:
mpirun -n 4 ./prog1 : -n 2 ./prog2
Now, how can I use MPI_Comm_size( MPI_COMM_WORLD, &size ) so that I get the size values
prog1: size = 4
prog2: size = 2
So far, I get "6" in both programs.
Solution:
This is doable, although somewhat cumbersome. The principle is to split MPI_COMM_WORLD into per-binary communicators based on the value of argv[0], which contains the name of the executable.
It could look like this:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <mpi.h>

int main( int argc, char *argv[] ) {
    MPI_Init( &argc, &argv );

    int wRank, wSize;
    MPI_Comm_rank( MPI_COMM_WORLD, &wRank );
    MPI_Comm_size( MPI_COMM_WORLD, &wSize );

    int myLen = strlen( argv[0] ) + 1;
    int maxLen;
    // Gathering the maximum length of the executables' names
    MPI_Allreduce( &myLen, &maxLen, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD );

    // Allocating memory for all of them
    char *names = malloc( wSize * maxLen );
    // and copying my name at its place in the array
    strcpy( names + ( wRank * maxLen ), argv[0] );

    // Now collecting all executables' names
    MPI_Allgather( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                   names, maxLen, MPI_CHAR, MPI_COMM_WORLD );

    // With that, I can sort out who is executing the same binary as me
    int binIdx = 0;
    while ( strcmp( argv[0], names + binIdx * maxLen ) != 0 ) {
        binIdx++;
    }
    free( names );

    // Now, all processes with the same binIdx value are running the same binary.
    // I can split MPI_COMM_WORLD accordingly
    MPI_Comm binComm;
    MPI_Comm_split( MPI_COMM_WORLD, binIdx, wRank, &binComm );

    int bRank, bSize;
    MPI_Comm_rank( binComm, &bRank );
    MPI_Comm_size( binComm, &bSize );

    printf( "Hello from process WORLD %d/%d running %d/%d %s binary\n",
            wRank, wSize, bRank, bSize, argv[0] );

    MPI_Comm_free( &binComm );
    MPI_Finalize();

    return 0;
}
On my machine, I compiled and ran it as follows:
~> mpicc mpmd.c
~> cp a.out b.out
~> mpirun -n 3 ./a.out : -n 2 ./b.out
Hello from process WORLD 0/5 running 0/3 ./a.out binary
Hello from process WORLD 1/5 running 1/3 ./a.out binary
Hello from process WORLD 4/5 running 1/2 ./b.out binary
Hello from process WORLD 2/5 running 2/3 ./a.out binary
Hello from process WORLD 3/5 running 0/2 ./b.out binary
Ideally, this could be greatly simplified by using MPI_Comm_split_type(), if a corresponding type for sorting by binary existed. Unfortunately, there is no such predefined MPI_COMM_TYPE_ in the MPI 3.1 standard. The only predefined one is MPI_COMM_TYPE_SHARED, for sorting between processes running on the same shared-memory compute node... Too bad! Maybe something to consider for the next version of the standard?
Tags: c-3, openmpi, c, parallel-processing, mpi  Source: https://codeday.me/bug/20190927/1824556.html