<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>Using Persistent Communication in Fortran MPI</title>
  <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_recent_posts?p_l_id=" />
  <subtitle>Using Persistent Communication in Fortran MPI</subtitle>
  <entry>
    <title>Using Persistent Communication in Fortran MPI</title>
    <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=1562708" />
    <author>
      <name>Julio Cesar Mendez</name>
    </author>
    <id>https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=1562708</id>
    <updated>2017-05-31T13:34:06Z</updated>
    <published>2017-05-31T13:33:19Z</published>
    <summary type="html">Dear community, &lt;br /&gt;I attended the summer boot camp in 2015, where we covered OpenMP, MPI, and OpenACC. &lt;br /&gt;At the boot camp we solved the classic Laplace-equation problem. However, I am now working on a larger and more complex problem: a real CFD code. &lt;br /&gt;In this code I need to update the ghost cells at every time step, so the communication overhead is quite high. I therefore decided to use persistent communication, but my ghost cells end up populated with zeroes. &lt;br /&gt;The code runs, but the solution is not right: as I said, all my ghost cells are filled with zeroes. I have tried several things, but nothing has worked. &lt;br /&gt;Has anyone used persistent communication with the Laplace equation? If so, I would greatly appreciate it if you could share the code so I can compare my syntax against a working version. A small part of the code is shown below. &lt;br /&gt;&lt;br /&gt;The code calls MPI_Subroutine, where I set up the communication: &lt;br /&gt;&lt;br /&gt;!-----------------------------------------------------------------------&lt;br /&gt;!Starting up MPI&lt;br /&gt;call MPI_INIT(ierr)&lt;br /&gt;call MPI_COMM_SIZE(MPI_COMM_WORLD,npes,ierr)&lt;br /&gt;call MPI_COMM_RANK(MPI_COMM_WORLD,MyRank,ierr)&lt;br /&gt;&lt;br /&gt;!Compute the size of the local block (1D decomposition)&lt;br /&gt;Jmax = JmaxGlobal&lt;br /&gt;Imax = ImaxGlobal/npes&lt;br /&gt;if (MyRank.lt.(ImaxGlobal - npes*Imax)) then&lt;br /&gt;  Imax = Imax + 1&lt;br /&gt;end if&lt;br /&gt;if (MyRank.ne.0.and.MyRank.ne.(npes-1)) then&lt;br /&gt;  Imax = Imax + 2&lt;br /&gt;else&lt;br /&gt;  Imax = Imax + 1&lt;br /&gt;end if&lt;br /&gt;&lt;br /&gt;!Computing neighbors&lt;br /&gt;if (MyRank.eq.0) then&lt;br /&gt;  Left = MPI_PROC_NULL&lt;br /&gt;else&lt;br /&gt;  Left = MyRank - 1&lt;br /&gt;end if&lt;br /&gt;&lt;br /&gt;if (MyRank.eq.(npes-1)) then&lt;br /&gt;  Right = MPI_PROC_NULL&lt;br /&gt;else&lt;br /&gt;  Right = MyRank + 1&lt;br /&gt;end if&lt;br /&gt;&lt;br /&gt;!Initializing the arrays on each processor according to the number of local nodes&lt;br /&gt;call InitializeArrays&lt;br /&gt;&lt;br /&gt;!Creating the communication channels for this computation:&lt;br /&gt;!sending and receiving u_old (the ghost cells)&lt;br /&gt;call MPI_SEND_INIT(u_old(2,:),Jmax,MPI_DOUBLE_PRECISION,Left,tag,MPI_COMM_WORLD,req(1),ierr)&lt;br /&gt;call MPI_RECV_INIT(u_old(Imax,:),Jmax,MPI_DOUBLE_PRECISION,Right,tag,MPI_COMM_WORLD,req(2),ierr)&lt;br /&gt;call MPI_SEND_INIT(u_old(Imax-1,:),Jmax,MPI_DOUBLE_PRECISION,Right,tag,MPI_COMM_WORLD,req(3),ierr)&lt;br /&gt;call MPI_RECV_INIT(u_old(1,:),Jmax,MPI_DOUBLE_PRECISION,Left,tag,MPI_COMM_WORLD,req(4),ierr)&lt;br /&gt;&lt;br /&gt;end subroutine MPI_Subroutine&lt;br /&gt;&lt;br /&gt;!-----------------------------------------------------------------------&lt;br /&gt;&lt;br /&gt;In the main code, inside the time-stepping do loop, I call MPI_STARTALL and MPI_WAITALL at each step: &lt;br /&gt;&lt;br /&gt;call MPI_STARTALL(4,req,ierr)&lt;br /&gt;call MPI_WAITALL(4,req,status,ierr)&lt;br /&gt;&lt;br /&gt;req is an array of dimension (4), and status is sized to match. &lt;br /&gt;&lt;br /&gt;I am using Fortran 90... Any suggestions or comments? &lt;br /&gt;Thanks in advance</summary>
    <dc:creator>Julio Cesar Mendez</dc:creator>
    <dc:date>2017-05-31T13:33:19Z</dc:date>
  </entry>
</feed>