Modifier and Type | Field and Description
---|---
static String | BULK_OUTPUT_CONF_KEY
static String | CF_RENAME_PROP
static String | FILTER_ARGS_CONF_KEY
static String | FILTER_CLASS_CONF_KEY
static String | HAS_LARGE_RESULT
static String | TABLE_NAME
static String | WAL_DURABILITY
Constructor and Description
---
Import()
Modifier and Type | Method and Description
---|---
static void | addFilterAndArguments(org.apache.hadoop.conf.Configuration conf, Class<? extends Filter> clazz, List<String> filterArgs) Add a Filter to be instantiated on import.
static void | configureCfRenaming(org.apache.hadoop.conf.Configuration conf, Map<String,String> renameMap) Sets a configuration property with key CF_RENAME_PROP in conf that tells the mapper how to rename column families.
static org.apache.hadoop.mapreduce.Job | createSubmittableJob(org.apache.hadoop.conf.Configuration conf, String[] args) Sets up the actual job.
static Cell | filterKv(Filter filter, Cell c) Attempt to filter out the keyvalue.
static void | flushRegionsIfNecessary(org.apache.hadoop.conf.Configuration conf) If the durability is set to Durability.SKIP_WAL and the data is imported to HBase, we need to flush all the regions of the table, as the data is held in memory and is not present in the Write Ahead Log to replay in case of a crash.
static Filter | instantiateFilter(org.apache.hadoop.conf.Configuration conf) Create a Filter to apply to all incoming keys (KeyValues) to optionally not include in the job output.
static void | main(String[] args) Main entry point.
int | run(String[] args)
public static final String CF_RENAME_PROP
public static final String BULK_OUTPUT_CONF_KEY
public static final String FILTER_CLASS_CONF_KEY
public static final String FILTER_ARGS_CONF_KEY
public static final String TABLE_NAME
public static final String WAL_DURABILITY
public static final String HAS_LARGE_RESULT
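These constants are public configuration keys that control the import job. Below is a minimal sketch of setting two of them on a Configuration; the output directory and the durability value "SKIP_WAL" are illustrative assumptions, not values taken from this page.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.Import;

public class ImportKeysSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Write HFiles to a directory for later bulk load instead of issuing
    // Puts against the live table ("/tmp/import-out" is a placeholder path).
    conf.set(Import.BULK_OUTPUT_CONF_KEY, "/tmp/import-out");
    // Durability for the import; "SKIP_WAL" (assumed here to be a valid
    // Durability name) bypasses the Write Ahead Log for speed.
    conf.set(Import.WAL_DURABILITY, "SKIP_WAL");
  }
}
```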
public Import()
public static Filter instantiateFilter(org.apache.hadoop.conf.Configuration conf)
Create a Filter to apply to all incoming keys (KeyValues) to optionally not include in the job output.
Parameters:
conf - Configuration from which to load the filter
Throws:
IllegalArgumentException - if the filter is misconfigured

public static Cell filterKv(Filter filter, Cell c) throws IOException
Attempt to filter out the keyvalue.
Parameters:
c - Cell on which to apply the filter
Returns:
Cell
Throws:
IOException
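The two methods above, together with addFilterAndArguments (documented below), form a round trip: register a filter in the configuration, rebuild it in the mapper, and run each Cell through it. A minimal sketch, assuming a PrefixFilter that can be built from a single string argument; the class ImportFilterSketch, the prefix "row-", and the sample KeyValue are illustrative.

```java
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.mapreduce.Import;
import org.apache.hadoop.hbase.util.Bytes;

public class ImportFilterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();

    // Register a PrefixFilter (single argument: the row-key prefix) in the
    // configuration so the import mappers can instantiate it server-side.
    Import.addFilterAndArguments(conf, PrefixFilter.class,
        Collections.singletonList("row-"));

    // Roughly what the mapper does: rebuild the filter from the
    // configuration and run each incoming Cell through it.
    Filter filter = Import.instantiateFilter(conf);
    Cell cell = new KeyValue(Bytes.toBytes("row-1"), Bytes.toBytes("cf"),
        Bytes.toBytes("q"), Bytes.toBytes("value"));
    Cell kept = Import.filterKv(filter, cell);
    // A null result is taken here to mean the Cell was filtered out.
    System.out.println(kept == null ? "cell filtered out" : "cell kept");
  }
}
```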
public static void configureCfRenaming(org.apache.hadoop.conf.Configuration conf, Map<String,String> renameMap)
Sets a configuration property with key CF_RENAME_PROP in conf that tells the mapper how to rename column families.

Alternatively, instead of calling this function, you could set the configuration key CF_RENAME_PROP yourself. The value should look like srcCf1:destCf1,srcCf2:destCf2,.... This would have the same effect on the mapper behavior.

Parameters:
conf - the Configuration in which the CF_RENAME_PROP key will be set
renameMap - a mapping from source CF names to destination CF names

public static void addFilterAndArguments(org.apache.hadoop.conf.Configuration conf, Class<? extends Filter> clazz, List<String> filterArgs) throws IOException
Add a Filter to be instantiated on import.
Parameters:
conf - Configuration to update (will be passed to the job)
clazz - Filter subclass to instantiate on the server
filterArgs - List of arguments to pass to the filter on instantiation
Throws:
IOException
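A short sketch of the two equivalent ways to configure column-family renaming described above; the family names oldCf and newCf are placeholders.

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.Import;

public class CfRenameSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Option 1: let configureCfRenaming build the property value.
    Map<String, String> renames = new HashMap<>();
    renames.put("oldCf", "newCf"); // rename family "oldCf" to "newCf"
    Import.configureCfRenaming(conf, renames);

    // Option 2: set the key directly in the documented
    // srcCf1:destCf1,srcCf2:destCf2,... format.
    conf.set(Import.CF_RENAME_PROP, "oldCf:newCf");
  }
}
```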
public static org.apache.hadoop.mapreduce.Job createSubmittableJob(org.apache.hadoop.conf.Configuration conf, String[] args) throws IOException
Sets up the actual job.
Parameters:
conf - The current configuration.
args - The command line parameters.
Throws:
IOException - When setting up the job fails.

public static void flushRegionsIfNecessary(org.apache.hadoop.conf.Configuration conf) throws IOException, InterruptedException
If the durability is set to Durability.SKIP_WAL and the data is imported to HBase, we need to flush all the regions of the table, as the data is held in memory and is not present in the Write Ahead Log to replay in case of a crash. This method flushes all regions of the table when data has been imported with Durability.SKIP_WAL.
Throws:
IOException
InterruptedException
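Taken together, these two methods suggest a driver pattern: build and run the job, then flush if the WAL was skipped. A hedged sketch, assuming "SKIP_WAL" is a valid Durability name for the WAL_DURABILITY key and that createSubmittableJob records the table name in the configuration for the later flush; the table name and input directory are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.Import;
import org.apache.hadoop.mapreduce.Job;

public class ImportDriverSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Ask the import to skip the WAL for speed; this is what makes the
    // flush below necessary.
    conf.set(Import.WAL_DURABILITY, "SKIP_WAL");

    // args are the tool's command line parameters: <tablename> <inputdir>.
    Job job = Import.createSubmittableJob(conf,
        new String[] { "myTable", "/export/myTable" });
    boolean ok = job.waitForCompletion(true);

    // With SKIP_WAL the imported data lives only in the memstores, so
    // force it to disk before trusting the import. Assumes the table name
    // set by createSubmittableJob is still present in conf.
    Import.flushRegionsIfNecessary(conf);

    System.exit(ok ? 0 : 1);
  }
}
```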
public int run(String[] args) throws Exception
Specified by:
run in interface org.apache.hadoop.util.Tool
Throws:
Exception
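Since run(String[]) implements org.apache.hadoop.util.Tool, the class can be driven through Hadoop's ToolRunner, which is essentially what main(String[]) does. A minimal sketch; the table name and input directory arguments are placeholders.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.Import;
import org.apache.hadoop.util.ToolRunner;

public class RunImportSketch {
  public static void main(String[] args) throws Exception {
    // Equivalent to invoking the tool from the command line:
    //   hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>
    int exitCode = ToolRunner.run(HBaseConfiguration.create(), new Import(),
        new String[] { "myTable", "/export/myTable" });
    System.exit(exitCode);
  }
}
```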