<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>how to run MPI + OpenMP batch</title>
  <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_recent_posts?p_l_id=" />
  <subtitle>how to run MPI + OpenMP batch</subtitle>
  <entry>
    <title>RE: how to run MPI + OpenMP batch</title>
    <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=657097" />
    <author>
      <name>Lester Ingber</name>
    </author>
    <id>https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=657097</id>
    <updated>2014-02-03T19:33:18Z</updated>
    <published>2014-02-03T19:32:23Z</published>
    <summary type="html">I just had a chance to test my new scripts.  There are a few corrections in the attached zipfile (containing three scripts) from my previous posting.&lt;br /&gt;&lt;br /&gt;Lester</summary>
    <dc:creator>Lester Ingber</dc:creator>
    <dc:date>2014-02-03T19:32:23Z</dc:date>
  </entry>
  <entry>
    <title>RE: how to run MPI + OpenMP batch</title>
    <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=655579" />
    <author>
      <name>Lester Ingber</name>
    </author>
    <id>https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=655579</id>
    <updated>2014-01-31T02:24:16Z</updated>
    <published>2014-01-31T02:24:16Z</published>
    <summary type="html">Jeff:&lt;br /&gt; &lt;br /&gt;Thanks for your reply.  I got some concrete answers from Glenn Lockwood in XSEDE Support.  He recommends splitting the total jobs into pieces.  The final draft I have is attached in three short files {bundler.pl, xxsede_mpi_openmp_master.csh, xxsede_mpi_openmp_template.csh} contained in mpi_open.zip.&lt;br /&gt;&lt;br /&gt;Lester</summary>
    <dc:creator>Lester Ingber</dc:creator>
    <dc:date>2014-01-31T02:24:16Z</dc:date>
  </entry>
  <entry>
    <title>how to run MPI + OpenMP batch</title>
    <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=652531" />
    <author>
      <name>Lester Ingber</name>
    </author>
    <id>https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=652531</id>
    <updated>2014-01-26T23:49:17Z</updated>
    <published>2014-01-25T16:05:52Z</published>
    <summary type="html">Without complete docs on mpirun and ibrun, I am still not sure how to submit new jobs on Trestles.&lt;br /&gt;&lt;br /&gt;I have successfully run jobs under MPI with a script:&lt;br /&gt;#!/bin/tcsh -xv&lt;br /&gt;# qsub xxsede_multiple.csh &amp;gt;&amp;amp;! tp.log_all&lt;br /&gt;#PBS -q normal&lt;br /&gt;#PBS -A TG-PHY130022&lt;br /&gt;#PBS -l nodes=4:ppn=30&lt;br /&gt;#PBS -l walltime=25:00:00&lt;br /&gt;#PBS -o xsede_output&lt;br /&gt;#PBS -N cmi_eeg&lt;br /&gt;#PBS -V&lt;br /&gt;cd $PBS_O_WORKDIR&lt;br /&gt;...&lt;br /&gt;mpirun_rsh -np 120 -hostfile $PBS_NODEFILE MV2_ENABLE_AFFINITY=0 {$PWD_WORK}/CMI_EEG/bundler.pl  {$PWD_WORK}/CMI_EEG/tasks&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;However, now each of the 120 MPI processes/runs/fits (using my ASA optimization code) can ALSO run independently in parallel under OpenMP.  I understand that each ASA run can spawn up to 16 threads, e.g., 120x16 MPI+OpenMP processes could run together.  How do I submit this?  What changes need to be made in the above script?  Is 512 the maximum that can be spawned at any time?  If so, then I likely would need 4 separate aggregate runs of 30x16 each, etc.&lt;br /&gt;&lt;br /&gt;Alternatively, I was advised to run batch scripts using ibrun.  I assume this means I would use ibrun INSTEAD of mpirun?  If so, how do I pass the same parameters + #PBS settings as I have been using under mpirun?&lt;br /&gt;&lt;br /&gt;Thanks.&lt;br /&gt;&lt;br /&gt;Lester&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;P.S.:&lt;br /&gt;I do see quite a bit of info on such hybrid runs on&lt;br /&gt;https://www.cac.cornell.edu/ranger/Hybrid/smpnodes.aspx&lt;br /&gt;but I would like some definitive answers on using my previous MPI script to expand into hybrid runs on Trestles.  So far, the info I have received does not seem to jibe with the kinds of codes/examples given in the Cornell tutorial.&lt;br /&gt;&lt;br /&gt;I&amp;#039;d be just as happy to run a top-level MPI-C code that would call ASA-C codes, each ASA-C run set up as now to use OpenMP.  I&amp;#039;d just like to get an example that will run on Trestles.</summary>
    <dc:creator>Lester Ingber</dc:creator>
    <dc:date>2014-01-25T16:05:52Z</dc:date>
  </entry>
</feed>