See the HBase and MapReduce javadoc (the org.apache.hadoop.hbase.mapreduce package summary). Start there; below is some additional help.
For more information about MapReduce itself (i.e., the framework in general), see the Hadoop site (TODO: need good links here; the ones we used to have rotted).
When TableInputFormat is used to source an HBase table in a MapReduce job, its splitter makes one map task per region of the table. Thus, if there are 100 regions in the table, there will be 100 map tasks for the job, regardless of how many column families are selected in the Scan.
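As a minimal sketch of wiring a table into a map-only job this way, the snippet below uses TableMapReduceUtil.initTableMapperJob, which configures TableInputFormat under the covers. The table name "mytable" and the classes MyReadJob and MyMapper are placeholders, not names from this guide; a running cluster and the HBase client jars are assumed.

```java
// Hedged sketch: job driver for a map-only read over an HBase table.
// "mytable", MyReadJob, and MyMapper are hypothetical placeholders.
Configuration config = HBaseConfiguration.create();
Job job = Job.getInstance(config, "ExampleRead");
job.setJarByClass(MyReadJob.class);

Scan scan = new Scan();
scan.setCaching(500);        // the default of 1 is a poor fit for MapReduce jobs
scan.setCacheBlocks(false);  // always false for MapReduce jobs

TableMapReduceUtil.initTableMapperJob(
    "mytable",       // input table
    scan,            // Scan instance controlling columns/rows read
    MyMapper.class,  // mapper class
    null,            // mapper output key class (null for map-only)
    null,            // mapper output value class (null for map-only)
    job);
job.setOutputFormatClass(NullOutputFormat.class); // discard output

boolean ok = job.waitForCompletion(true);
```

Because TableInputFormat supplies one split per region, this job launches as many map tasks as the table has regions, whatever the Scan selects.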
For those interested in implementing custom splitters, see the getSplits method in TableInputFormatBase. That is where the logic for map-task assignment resides.
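One common pattern for a custom splitter is to subclass TableInputFormat, let the inherited getSplits produce the default one-split-per-region list, and then post-process that list. The sketch below assumes this approach; the class name MyTableInputFormat and the post-processing step are hypothetical.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.mapreduce.TableInputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;

// Hypothetical custom splitter: reuses the default per-region split
// logic from TableInputFormatBase, then adjusts the result.
public class MyTableInputFormat extends TableInputFormat {
  @Override
  public List<InputSplit> getSplits(JobContext context) throws IOException {
    // Default behavior: one InputSplit per region of the table.
    List<InputSplit> splits = super.getSplits(context);
    // Custom policy would go here: filter, merge, or reorder the
    // per-region splits before the framework schedules map tasks.
    return splits;
  }
}
```

The job would then opt in with job.setInputFormatClass(MyTableInputFormat.class) instead of relying on the default set by TableMapReduceUtil.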