Class CompactionTool.CompactionInputFormat

java.lang.Object
  org.apache.hadoop.mapreduce.InputFormat<K,V>
    org.apache.hadoop.mapreduce.lib.input.FileInputFormat<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text>
      org.apache.hadoop.mapreduce.lib.input.TextInputFormat
        org.apache.hadoop.hbase.regionserver.CompactionTool.CompactionInputFormat

Enclosing class:
  CompactionTool

private static class CompactionTool.CompactionInputFormat
extends org.apache.hadoop.mapreduce.lib.input.TextInputFormat
Input format that uses the store files' block locations to determine input split locality.
Nested Class Summary

Nested classes/interfaces inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat:
  org.apache.hadoop.mapreduce.lib.input.FileInputFormat.Counter

Field Summary

Fields inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat:
  DEFAULT_LIST_STATUS_NUM_THREADS, INPUT_DIR, INPUT_DIR_RECURSIVE, LIST_STATUS_NUM_THREADS,
  NUM_INPUT_FILES, PATHFILTER_CLASS, SPLIT_MAXSIZE, SPLIT_MINSIZE

Constructor Summary

  private CompactionInputFormat()

Method Summary

  static List<org.apache.hadoop.fs.Path>
    createInputFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileSystem stagingFs,
                    org.apache.hadoop.fs.Path path, Set<org.apache.hadoop.fs.Path> toCompactDirs)
      Create the input file for the given directories to compact.

  List<org.apache.hadoop.mapreduce.InputSplit>
    getSplits(org.apache.hadoop.mapreduce.JobContext job)
      Returns a split for each store files directory, using the block location of each file as a locality reference.

  private static String[]
    getStoreDirHosts(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
      Return the top hosts of the store files, used by the split.

  protected boolean
    isSplitable(org.apache.hadoop.mapreduce.JobContext context, org.apache.hadoop.fs.Path file)

Methods inherited from class org.apache.hadoop.mapreduce.lib.input.TextInputFormat:
  createRecordReader

Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat:
  addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex,
  getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths,
  getMaxSplitSize, getMinSplitSize, listStatus, makeSplit, makeSplit, setInputDirRecursive,
  setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize
Constructor Details

CompactionInputFormat

  private CompactionInputFormat()
 
Method Details

isSplitable

  protected boolean isSplitable(org.apache.hadoop.mapreduce.JobContext context,
                                org.apache.hadoop.fs.Path file)

  Overrides:
    isSplitable in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
 
- 
getSplitspublic List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext job) throws IOException Returns a split for each store files directory using the block location of each file as locality reference.- Overrides:
- getSplitsin class- org.apache.hadoop.mapreduce.lib.input.FileInputFormat<org.apache.hadoop.io.LongWritable,- org.apache.hadoop.io.Text> 
- Throws:
- IOException
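The split-per-directory idea can be sketched with plain Java collections in place of the Hadoop API. Everything here is illustrative: SimpleSplit is a hypothetical stand-in for Hadoop's FileSplit, and the dirsToHosts map stands in for the input file combined with per-directory host lookups.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch only (not the HBase source): one split per store-files
// directory, carrying that directory's preferred hosts as a locality hint.
public class SplitsSketch {
    static final class SimpleSplit {
        final String dir;     // store-files directory this split covers
        final String[] hosts; // hosts that serve most of its blocks
        SimpleSplit(String dir, String[] hosts) { this.dir = dir; this.hosts = hosts; }
    }

    // One entry (one line of the input file) -> one split with its locality hosts.
    static List<SimpleSplit> getSplits(Map<String, String[]> dirsToHosts) {
        List<SimpleSplit> splits = new ArrayList<>();
        dirsToHosts.forEach((dir, hosts) -> splits.add(new SimpleSplit(dir, hosts)));
        return splits;
    }
}
```

The scheduler can then place each map task on (or near) one of the split's hosts, so the compaction reads its store files from local disks.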
 
getStoreDirHosts

  private static String[] getStoreDirHosts(org.apache.hadoop.fs.FileSystem fs,
                                           org.apache.hadoop.fs.Path path)
                                    throws IOException

  Return the top hosts of the store files, used by the split.

  Throws:
    IOException
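A hedged sketch of the idea behind this method, with plain collections in place of FileSystem/BlockLocation (the real method reads block locations from HDFS): count how often each host holds a block of the directory's store files, then return the most frequent hosts.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: rank hosts by how many store-file blocks they hold.
public class StoreDirHostsSketch {
    // hostsPerFile: for each store file in the directory, the hosts holding its blocks.
    static String[] topHosts(List<List<String>> hostsPerFile, int limit) {
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> hosts : hostsPerFile)
            for (String host : hosts)
                counts.merge(host, 1, Integer::sum);
        return counts.entrySet().stream()
            .sorted((a, b) -> b.getValue() - a.getValue()) // most blocks first
            .limit(limit)
            .map(Map.Entry::getKey)
            .toArray(String[]::new);
    }
}
```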
 
createInputFile

  public static List<org.apache.hadoop.fs.Path> createInputFile(org.apache.hadoop.fs.FileSystem fs,
                                                                org.apache.hadoop.fs.FileSystem stagingFs,
                                                                org.apache.hadoop.fs.Path path,
                                                                Set<org.apache.hadoop.fs.Path> toCompactDirs)
                                                         throws IOException

  Create the input file for the given directories to compact. The file is a text file with each line
  corresponding to a store files directory to compact.

  Throws:
    IOException
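The file format described above is simple enough to sketch with java.nio in place of Hadoop's FileSystem API: a plain text file, one store-files directory per line. The sorting and method name here are assumptions for the sketch; the real method writes through the staging FileSystem and returns Hadoop Paths.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the input-file layout: one directory per line.
public class InputFileSketch {
    static Path writeInputFile(Path target, Set<String> toCompactDirs) throws IOException {
        List<String> lines = new ArrayList<>(toCompactDirs);
        Collections.sort(lines); // deterministic order, for the sketch only
        Files.write(target, lines); // each line = one store-files directory
        return target;
    }
}
```

With TextInputFormat semantics, each line of this file becomes one record, which is what lets getSplits hand each mapper exactly one directory to compact.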
 
 