Chapter 7. HBase and MapReduce

Table of Contents

7.1. Map-Task Splitting
7.1.1. The Default HBase MapReduce Splitter
7.1.2. Custom Splitters
7.2. HBase MapReduce Examples
7.2.1. HBase MapReduce Read Example
7.2.2. HBase MapReduce Read/Write Example
7.2.3. HBase MapReduce Read/Write Example With Multi-Table Output
7.2.4. HBase MapReduce Summary to HBase Example
7.2.5. HBase MapReduce Summary to File Example
7.2.6. HBase MapReduce Summary to HBase Without Reducer
7.2.7. HBase MapReduce Summary to RDBMS
7.3. Accessing Other HBase Tables in a MapReduce Job
7.4. Speculative Execution

See the HBase and MapReduce javadocs. Start there. Below is some additional help.

For more information about the MapReduce framework in general, see the Apache Hadoop site.

Notice to MapReduce users of HBase 0.96.1 and above

Some MapReduce jobs that use HBase fail to launch. The symptom is an exception similar to the following:

Exception in thread "main" java.lang.IllegalAccessError: class cannot access its superclass
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(...)
    at java.lang.ClassLoader.loadClass(...)
    at java.lang.ClassLoader.loadClass(...)

This is because of an optimization introduced in HBASE-9867 that inadvertently introduced a classloader dependency.

This affects both jobs using the -libjars option and "fat jar" jobs, those which package their runtime dependencies in a nested lib folder.

In order to satisfy the new classloader requirements, hbase-protocol.jar must be included in Hadoop's classpath. This can be resolved system-wide by placing a reference to hbase-protocol.jar in Hadoop's lib directory, either via a symlink or by copying the jar into that location.

This can also be achieved on a per-job launch basis by including it in the HADOOP_CLASSPATH environment variable at job submission time. When launching jobs that package their dependencies, all three of the following job launching commands satisfy this requirement:

$ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
$ HADOOP_CLASSPATH=$(hbase mapredcp):/path/to/hbase/conf hadoop jar MyJob.jar MyJobMainClass
$ HADOOP_CLASSPATH=$(hbase classpath) hadoop jar MyJob.jar MyJobMainClass

For jars that do not package their dependencies, the following command structure is necessary:

$ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ...

See also HBASE-10304 for further discussion of this issue.

7.1. Map-Task Splitting

7.1.1. The Default HBase MapReduce Splitter

When TableInputFormat is used to source an HBase table in a MapReduce job, its splitter will make a map task for each region of the table. Thus, if there are 100 regions in the table, there will be 100 map tasks for the job, regardless of how many column families are selected in the Scan.
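A read-only job wired up through TableMapReduceUtil picks up this region-per-split behavior automatically. The following is a minimal sketch, not one of the examples from this chapter; the table name "mytable" and all class names are illustrative.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class RowCountSketch {

  // Map-only job: count rows via a counter, emit nothing.
  static class RowCountMapper
      extends TableMapper<ImmutableBytesWritable, ImmutableBytesWritable> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context) {
      context.getCounter("sketch", "rows").increment(1);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "RowCountSketch");
    job.setJarByClass(RowCountSketch.class);

    Scan scan = new Scan();
    scan.setCaching(500);        // fetch more rows per RPC than the default
    scan.setCacheBlocks(false);  // don't pollute the block cache from an MR scan

    // TableInputFormat (used underneath) will create one map task per region
    // of "mytable"; no output key/value classes because the mapper emits nothing.
    TableMapReduceUtil.initTableMapperJob("mytable", scan,
        RowCountMapper.class, null, null, job);
    job.setOutputFormatClass(NullOutputFormat.class);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Running this against a 100-region table would therefore launch 100 map tasks, one scanning each region.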

7.1.2. Custom Splitters

For those interested in implementing custom splitters, see the method getSplits in TableInputFormatBase. That is where the logic for map-task assignment resides.
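One convenient pattern is to delegate to the default getSplits and then reshape its output, rather than computing splits from scratch. The pruning rule below is a hypothetical example for illustration, not part of the HBase API; it extends TableInputFormat, which inherits getSplits from TableInputFormatBase.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.hbase.mapreduce.TableSplit;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;

// Hypothetical custom splitter: start from the default one-split-per-region
// list, then drop splits for regions that end before a boundary key, so no
// map tasks are scheduled for regions the job does not care about.
public class BoundaryPruningInputFormat extends TableInputFormat {

  private static final byte[] BOUNDARY = Bytes.toBytes("m"); // illustrative key

  @Override
  public List<InputSplit> getSplits(JobContext context) throws IOException {
    List<InputSplit> regionSplits = super.getSplits(context); // one per region
    List<InputSplit> kept = new ArrayList<InputSplit>();
    for (InputSplit split : regionSplits) {
      TableSplit ts = (TableSplit) split;
      byte[] endRow = ts.getEndRow();
      // An empty end row marks the last region of the table; always keep it.
      if (endRow.length == 0 || Bytes.compareTo(endRow, BOUNDARY) > 0) {
        kept.add(ts);
      }
    }
    return kept;
  }
}
```

A job would opt in with job.setInputFormatClass(BoundaryPruningInputFormat.class); the same delegate-then-reshape approach also works for merging small regions' splits or fanning one region out to several map tasks.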
