<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>ibrun command line inputs / python-managed sequential parallel jobs</title>
  <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_recent_posts?p_l_id=" />
  <subtitle>ibrun command line inputs / python-managed sequential parallel jobs</subtitle>
  <entry>
    <title>RE: ibrun command line inputs / python-managed sequential parallel jobs</title>
    <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=1118669" />
    <author>
      <name>Heather Louise Kline</name>
    </author>
    <id>https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=1118669</id>
    <updated>2016-01-11T23:04:42Z</updated>
    <published>2016-01-11T23:04:42Z</published>
    <summary type="html">Update:&lt;br /&gt;&amp;#039;ibrun -o 0 -n %i %s&amp;#039;&lt;br /&gt;solves this problem,&lt;br /&gt;where %i and %s are the number of processors and the command string, provided later in the script. The -o option specifies the processor offset, which in this context is always 0.&lt;br /&gt;&lt;br /&gt;I will update our open-source code (SU2) so that this is used automatically on TACC machines, in case someone else needs to run our code on XSEDE.</summary>
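    <!-- Editor's note: a minimal Python sketch of the pattern described above, added for illustration. It assumes a serial driver that formats the &#039;ibrun -o 0 -n %i %s&#039; string and launches each parallel step with subprocess; the SU2 executable name and config file are hypothetical placeholders, not taken from the post. -->

```python
import subprocess

def ibrun_command(num_procs, command):
    # Build the ibrun invocation with an explicit processor offset of 0,
    # mirroring the 'ibrun -o 0 -n %i %s' pattern from the post.
    return "ibrun -o 0 -n %i %s" % (num_procs, command)

def run_step(num_procs, command):
    # Launch one parallel simulation and block until it finishes, so the
    # serial optimizer can consume its result before the next function call.
    return subprocess.call(ibrun_command(num_procs, command), shell=True)

# Hypothetical usage inside a SLURM batch job on a TACC machine:
#   run_step(16, "./SU2_CFD config.cfg")
# which executes: ibrun -o 0 -n 16 ./SU2_CFD config.cfg
```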
    <dc:creator>Heather Louise Kline</dc:creator>
    <dc:date>2016-01-11T23:04:42Z</dc:date>
  </entry>
  <entry>
    <title>ibrun command line inputs / python-managed sequential parallel jobs</title>
    <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=1108040" />
    <author>
      <name>Heather Louise Kline</name>
    </author>
    <id>https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=1108040</id>
    <updated>2015-12-17T04:22:33Z</updated>
    <published>2015-12-17T04:21:11Z</published>
    <summary type="html">I have a Python script (which works well on other systems) that launches a number of parallel jobs sequentially. Does anyone have recommendations on launching SLURM jobs from within Python scripts, especially on doing it the right way for the Stampede system?&lt;br /&gt;&lt;br /&gt;It is an optimization process where the optimizer is serial and the function calls are parallel, so it&amp;#039;s not feasible to launch each parallel simulation manually.&lt;br /&gt;&lt;br /&gt;I have tried &amp;#039;ibrun -n %i %s&amp;#039;, where the inputs are provided later in the script; however, this produced an error about not providing a -o option.&lt;br /&gt;&lt;br /&gt;Has anyone tried something similar, and/or can recommend the correct syntax?&lt;br /&gt;This process previously worked with &amp;#039;srun -n %i %s&amp;#039; on a different system, so I think it is just the ibrun inputs I need help with.</summary>
    <dc:creator>Heather Louise Kline</dc:creator>
    <dc:date>2015-12-17T04:21:11Z</dc:date>
  </entry>
</feed>

