c++ - How to make MPI_Send have processors send in order instead of randomly?


I am trying to run the program below, which uses parallel programming. If I use 4 processors, I want them to hold the sums 1+2=3, 3+4=7, 5+6=11, and 7+8=15, and I want sumvector to contain 3, 7, 11, and 15, in that order. However, since MPI_Send has the processors sending in a random order, I don't want sumvector to end up containing, say, 7, 15, 3, 11. How can I modify the code below to ensure this?

#include <iostream>
#include <vector>
#include <mpi.h>

using namespace std;

int main(int argc, char *argv[]) {
    int mynode, totalnodes;
    int sum, startval, endval, accum;
    MPI_Status status;
    int master = 3;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &totalnodes); // number of processes
    MPI_Comm_rank(MPI_COMM_WORLD, &mynode);     // rank of this process

    sum = 0; // zero the sum before accumulation
    vector<int> sumvector;
    startval = 8*mynode/totalnodes + 1;
    endval   = 8*(mynode+1)/totalnodes;

    for (int i = startval; i <= endval; i = i + 1)
        sum = sum + i;
    sumvector.push_back(sum);

    if (mynode != master)
    {
        MPI_Send(&sum, 1, MPI_INT, master, 1, MPI_COMM_WORLD); // #9, p.92
    }
    else
    {
        for (int j = 0; j < totalnodes; j = j + 1) {
            if (j != master)
            {
                MPI_Recv(&accum, 1, MPI_INT, j, 1, MPI_COMM_WORLD, &status);
                printf("processor %d received %d\n", mynode, j);
                sum = sum + accum;
            }
        }
    }

    MPI_Finalize();
    return 0;
}

Am I better off using multithreading instead of MPI?

I'm not sure what you want to do, but your current code is equivalent (apart from printing the rank each value was received from) to the following one:

for (int i = startval; i <= endval; i = i + 1)
    sum = sum + i;
sumvector.push_back(sum);

MPI_Reduce(mynode == master ? MPI_IN_PLACE : &sum, &sum, 1, MPI_INT, MPI_SUM,
           master, MPI_COMM_WORLD);
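As a minimal illustration (reusing the variable names from your code), only the master rank holds the reduced total afterwards, because MPI_Reduce delivers the result to the root rank only; MPI_IN_PLACE lets the master reuse its own sum as both input and output buffer:

// The reduced value is only valid on the root (master) rank.
if (mynode == master)
    printf("total sum = %d\n", sum); // with 4 ranks: 1+2+...+8 = 36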

What you are looking for is either this (the result is gathered at the master rank only):

for (int i = startval; i <= endval; i = i + 1)
    sum = sum + i;

sumvector.resize(totalnodes);
MPI_Gather(&sum, 1, MPI_INT, &sumvector[0], 1, MPI_INT,
           master, MPI_COMM_WORLD);
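As a quick sanity check (a minimal sketch using the names above), the master rank can print the gathered vector. MPI_Gather places the contribution of rank j at index j of the receive buffer, so with 4 processes the output is 3, 7, 11, 15 in rank order, regardless of when each process made the call:

// Only the master's sumvector is filled by MPI_Gather.
if (mynode == master) {
    for (int j = 0; j < totalnodes; j++)
        printf("rank %d computed %d\n", j, sumvector[j]);
}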

or this (the results are gathered at all ranks):

for (int i = startval; i <= endval; i = i + 1)
    sum = sum + i;

sumvector.resize(totalnodes);
MPI_Allgather(&sum, 1, MPI_INT, &sumvector[0], 1, MPI_INT, MPI_COMM_WORLD);
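With MPI_Allgather every rank ends up with the same rank-ordered vector, so (as a small illustration) each rank could, for example, compute the grand total locally without any further communication:

// Every rank now holds {3, 7, 11, 15}; sum it locally.
int total = 0;
for (int j = 0; j < totalnodes; j++)
    total += sumvector[j];
printf("rank %d sees total %d\n", mynode, total);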

Also, the following statement is entirely wrong:

However, since MPI_Send has the processors sending in a random order, I don't want sumvector to end up containing, say, 7, 15, 3, 11.

MPI point-to-point communication requires two things in order to succeed: there must be a sender that executes MPI_Send and a receiver that executes a matching MPI_Recv. The message reception order is enforced by calling MPI_Recv in a loop with increasing source rank, as in the code you've shown.
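To make that concrete, here is a minimal sketch (reusing the variable names from the question) of two alternative receive loops for the master side; only the MPI_ANY_SOURCE variant depends on the order in which the messages actually arrive:

// Ordered: rank j's value is received in iteration j, no matter
// when rank j actually called MPI_Send, because the source rank
// is given explicitly.
for (int j = 0; j < totalnodes; j++) {
    if (j == master) continue;
    MPI_Recv(&accum, 1, MPI_INT, j, 1, MPI_COMM_WORLD, &status);
    sumvector.push_back(accum); // stored in rank order
}

// Arrival order: with MPI_ANY_SOURCE the values are stored in
// whatever order the messages happen to arrive.
for (int j = 0; j < totalnodes - 1; j++) {
    MPI_Recv(&accum, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
    sumvector.push_back(accum); // order is not deterministic
}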

