<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <title>GC Overhead limit exceeded error in Hadoop</title>
  <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_recent_posts?p_l_id=" />
  <subtitle>GC Overhead limit exceeded error in Hadoop</subtitle>
  <entry>
    <title>Running MapReduce on multicore in Hadoop 2.6</title>
    <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=2804194" />
    <author>
      <name>Enamul Karim</name>
    </author>
    <id>https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=2804194</id>
    <updated>2021-09-07T21:27:08Z</updated>
    <published>2021-09-07T21:27:08Z</published>
    <summary type="html">I am trying to run a MapReduce application on a single-node machine and utilize its multiple cores for the mappers and reducers. I am setting it up in the following way:&lt;br /&gt;&lt;br /&gt;Configuration conf = new Configuration();&lt;br /&gt;conf.set(&amp;#034;mapreduce.tasktracker.map.tasks.maximum&amp;#034;, String.valueOf(NO_OF_MAPPERS));&lt;br /&gt;conf.set(&amp;#034;mapreduce.tasktracker.reduce.tasks.maximum&amp;#034;, String.valueOf(NO_OF_REDUCERS));&lt;br /&gt;&lt;br /&gt;I am not sure whether I am doing this right, because I don&amp;#039;t see any performance increase when using a higher number of mappers and reducers. Can anyone help me? Any help is highly appreciated.</summary>
    <dc:creator>Enamul Karim</dc:creator>
    <dc:date>2021-09-07T21:27:08Z</dc:date>
  </entry>
  <entry>
    <title>GC Overhead limit exceeded error in Hadoop</title>
    <link rel="alternate" href="https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=2523438" />
    <author>
      <name>Enamul Karim</name>
    </author>
    <id>https://conferences.xsede.org/c/message_boards/find_message?p_l_id=&amp;messageId=2523438</id>
    <updated>2020-07-30T09:13:38Z</updated>
    <published>2020-07-30T09:05:57Z</published>
    <summary type="html">I am working on a Hadoop Map/Reduce application. In one of the experiments with a large data set, I got a &amp;#034;GC overhead limit exceeded&amp;#034; error. Can anyone tell me what the reason could be and how I can solve it in Hadoop?</summary>
    <dc:creator>Enamul Karim</dc:creator>
    <dc:date>2020-07-30T09:05:57Z</dc:date>
  </entry>
</feed>