@InterfaceAudience.Public
public class MultiTableHFileOutputFormat
extends HFileOutputFormat2

Nested classes/interfaces inherited from class HFileOutputFormat2:
HFileOutputFormat2.TableInfo, HFileOutputFormat2.WriterLength

| Modifier and Type | Field and Description |
|---|---|
| private static org.slf4j.Logger | LOG |
Fields inherited from class HFileOutputFormat2:
BLOCK_SIZE_FAMILIES_CONF_KEY, blockSizeDetails, BLOOM_PARAM_FAMILIES_CONF_KEY, BLOOM_TYPE_FAMILIES_CONF_KEY, bloomParamDetails, bloomTypeDetails, COMPRESSION_FAMILIES_CONF_KEY, COMPRESSION_OVERRIDE_CONF_KEY, compressionDetails, DATABLOCK_ENCODING_FAMILIES_CONF_KEY, DATABLOCK_ENCODING_OVERRIDE_CONF_KEY, dataBlockEncodingDetails, LOCALITY_SENSITIVE_CONF_KEY, MULTI_TABLE_HFILEOUTPUTFORMAT_CONF_KEY, OUTPUT_TABLE_NAME_CONF_KEY, STORAGE_POLICY_PROPERTY, STORAGE_POLICY_PROPERTY_CF_PREFIX, tableSeparator

| Constructor and Description |
|---|
| MultiTableHFileOutputFormat() |
| Modifier and Type | Method and Description |
|---|---|
| static void | configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job, List<HFileOutputFormat2.TableInfo> multiTableDescriptors) - Analogous to HFileOutputFormat2.configureIncrementalLoad(Job, TableDescriptor, RegionLocator), this method configures the requisite number of reducers to write HFiles for multiple tables simultaneously. |
| static byte[] | createCompositeKey(byte[] tableName, byte[] suffix) - Creates a composite key to use as a mapper output key when using MultiTableHFileOutputFormat.configureIncrementalLoad to set up a bulk-ingest job. |
| static byte[] | createCompositeKey(byte[] tableName, ImmutableBytesWritable suffix) - Alternate API that accepts an ImmutableBytesWritable for the suffix. |
| static byte[] | createCompositeKey(String tableName, ImmutableBytesWritable suffix) - Alternate API that accepts a String for the tableName and an ImmutableBytesWritable for the suffix. |
| protected static byte[] | getSuffix(byte[] keyBytes) |
| protected static byte[] | getTableName(byte[] keyBytes) |
| private static int | validateCompositeKey(byte[] keyBytes) |
Methods inherited from class HFileOutputFormat2:
combineTableNameSuffix, configureIncrementalLoad, configureIncrementalLoad, configureIncrementalLoad, configureIncrementalLoadMap, configurePartitioner, configureStoragePolicy, createFamilyBlockSizeMap, createFamilyBloomParamMap, createFamilyBloomTypeMap, createFamilyCompressionMap, createFamilyDataBlockEncodingMap, createRecordWriter, getRecordWriter, getTableNameSuffixedWithFamily, serializeColumnFamilyAttribute

Methods inherited from class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat:
checkOutputSpecs, getCompressOutput, getDefaultWorkFile, getOutputCommitter, getOutputCompressorClass, getOutputName, getOutputPath, getPathForWorkFile, getUniqueFile, getWorkOutputPath, setCompressOutput, setOutputCompressorClass, setOutputName, setOutputPath

Field Detail

LOG
private static final org.slf4j.Logger LOG

Constructor Detail

MultiTableHFileOutputFormat
public MultiTableHFileOutputFormat()
Method Detail

createCompositeKey
public static byte[] createCompositeKey(byte[] tableName, byte[] suffix)
Creates a composite key to use as a mapper output key when using MultiTableHFileOutputFormat.configureIncrementalLoad to set up a bulk-ingest job.
Parameters:
tableName - Name of the Table, e.g. TableName.getNameAsString()
suffix - Usually represents a row key when creating a mapper key or column family

createCompositeKey
public static byte[] createCompositeKey(byte[] tableName, ImmutableBytesWritable suffix)
Alternate API that accepts an ImmutableBytesWritable for the suffix.
See Also: createCompositeKey(byte[], byte[])

createCompositeKey
public static byte[] createCompositeKey(String tableName, ImmutableBytesWritable suffix)
Alternate API that accepts a String for the tableName and an ImmutableBytesWritable for the suffix.
See Also: createCompositeKey(byte[], byte[])

configureIncrementalLoad
public static void configureIncrementalLoad(org.apache.hadoop.mapreduce.Job job, List<HFileOutputFormat2.TableInfo> multiTableDescriptors) throws IOException
Analogous to HFileOutputFormat2.configureIncrementalLoad(Job, TableDescriptor, RegionLocator), this method configures the requisite number of reducers to write HFiles for multiple tables simultaneously.
Parameters:
job - See Job
multiTableDescriptors - Table descriptor and region locator pairs
Throws: IOException

validateCompositeKey
private static final int validateCompositeKey(byte[] keyBytes)
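The composite key is simply the table name and the suffix joined by HFileOutputFormat2's tableSeparator, which is what getTableName, getSuffix, and validateCompositeKey later pick apart. The stdlib-only sketch below mimics that layout; the single `;` separator byte is an assumption for illustration, not a guaranteed detail of the HBase implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CompositeKeySketch {
    // Assumption: the separator is a single ';' byte (stands in for
    // HFileOutputFormat2.tableSeparator in this sketch).
    private static final byte SEPARATOR = ';';

    // Mirrors createCompositeKey(byte[], byte[]): tableName + separator + suffix.
    static byte[] createCompositeKey(byte[] tableName, byte[] suffix) {
        byte[] key = new byte[tableName.length + 1 + suffix.length];
        System.arraycopy(tableName, 0, key, 0, tableName.length);
        key[tableName.length] = SEPARATOR;
        System.arraycopy(suffix, 0, key, tableName.length + 1, suffix.length);
        return key;
    }

    // Mirrors validateCompositeKey: locate the separator or reject the key.
    static int separatorIndex(byte[] keyBytes) {
        for (int i = 0; i < keyBytes.length; i++) {
            if (keyBytes[i] == SEPARATOR) {
                return i;
            }
        }
        throw new IllegalArgumentException("Invalid composite key: no separator found");
    }

    // Mirrors getTableName: bytes before the separator.
    static byte[] getTableName(byte[] keyBytes) {
        return Arrays.copyOfRange(keyBytes, 0, separatorIndex(keyBytes));
    }

    // Mirrors getSuffix: bytes after the separator.
    static byte[] getSuffix(byte[] keyBytes) {
        return Arrays.copyOfRange(keyBytes, separatorIndex(keyBytes) + 1, keyBytes.length);
    }

    public static void main(String[] args) {
        byte[] key = createCompositeKey(
                "t1".getBytes(StandardCharsets.UTF_8),
                "row-0001".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(getTableName(key), StandardCharsets.UTF_8)); // t1
        System.out.println(new String(getSuffix(key), StandardCharsets.UTF_8));    // row-0001
    }
}
```

In a mapper, the key built this way is emitted as the map output key, and the partitioner and reducers configured by configureIncrementalLoad route each record to the right table's HFiles.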
getTableName
protected static byte[] getTableName(byte[] keyBytes)

getSuffix
protected static byte[] getSuffix(byte[] keyBytes)
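For reference, a driver using this class typically builds one HFileOutputFormat2.TableInfo per target table and lets configureIncrementalLoad size the reducer stage. The sketch below is illustrative only: it assumes a running HBase cluster and the hbase-mapreduce dependency on the classpath, and the table names, output path, and MyDriver/MyMapper classes are hypothetical.

```java
// Sketch only: not runnable standalone; requires HBase/Hadoop dependencies.
Configuration conf = HBaseConfiguration.create();
Job job = Job.getInstance(conf, "multi-table-bulk-load");
job.setJarByClass(MyDriver.class);      // MyDriver is hypothetical
job.setMapperClass(MyMapper.class);     // emits createCompositeKey(...) as its output key

try (Connection conn = ConnectionFactory.createConnection(conf)) {
  List<HFileOutputFormat2.TableInfo> tables = new ArrayList<>();
  for (String name : Arrays.asList("table1", "table2")) {  // illustrative table names
    TableName tn = TableName.valueOf(name);
    tables.add(new HFileOutputFormat2.TableInfo(
        conn.getTable(tn).getDescriptor(), conn.getRegionLocator(tn)));
  }
  MultiTableHFileOutputFormat.configureIncrementalLoad(job, tables);
}

FileOutputFormat.setOutputPath(job, new Path("/tmp/bulkload-out")); // illustrative path
System.exit(job.waitForCompletion(true) ? 0 : 1);
```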
Copyright © 2007–2021 The Apache Software Foundation. All rights reserved.