Class TestZooKeeperTableArchiveClient
java.lang.Object
org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient
Spin up a small cluster and check that the hfiles of a region are properly long-term archived as specified via the ZKTableArchiveClient.
Field Summary
private static org.apache.hadoop.hbase.backup.example.ZKTableArchiveClient archivingClient
static final HBaseClassTestRule CLASS_RULE
private static org.apache.hadoop.hbase.client.Connection CONNECTION
private static final org.slf4j.Logger LOG
private static org.apache.hadoop.hbase.master.cleaner.DirScanPool POOL
private static org.apache.hadoop.hbase.regionserver.RegionServerServices rss
private static final String STRING_TABLE_NAME
private static final byte[] TABLE_NAME
private static final byte[] TEST_FAM
private final List<org.apache.hadoop.fs.Path> toCleanup
private static final HBaseTestingUtil UTIL
Constructor Summary
Constructors
TestZooKeeperTableArchiveClient()
Method Summary
static void cleanupTest()
private void createArchiveDirectory()
private void createHFileInRegion(org.apache.hadoop.hbase.regionserver.HRegion region, byte[] columnFamily) - Create a new hfile in the passed region
private List<org.apache.hadoop.fs.Path> getAllFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) - Get all the files (non-directory entries) in the file system under the passed directory
private org.apache.hadoop.fs.Path getArchiveDir()
private org.apache.hadoop.fs.Path getTableDir(String tableName)
private void loadFlushAndCompact(org.apache.hadoop.hbase.regionserver.HRegion region, byte[] family)
private void runCleaner(org.apache.hadoop.hbase.master.cleaner.HFileCleaner cleaner, CountDownLatch finished, org.apache.hadoop.hbase.Stoppable stop)
private org.apache.hadoop.hbase.master.cleaner.HFileCleaner setupAndCreateCleaner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path archiveDir, org.apache.hadoop.hbase.Stoppable stop)
private CountDownLatch setupCleanerWatching(org.apache.hadoop.hbase.backup.example.LongTermArchivingHFileCleaner cleaner, List<org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate> cleaners, int expected) - Spy on the LongTermArchivingHFileCleaner to ensure we can catch when the cleaner has seen all the files
static void setupCluster() - Setup the config for the cluster
private static void setupConf(org.apache.hadoop.conf.Configuration conf)
void tearDown()
void testArchivingEnableDisable() - Test turning on/off archiving
void testArchivingOnSingleTable()
void testMultipleTables() - Test archiving/cleaning across multiple tables, where some are retained, and others aren't
private List<org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate> turnOnArchiving(String tableName, org.apache.hadoop.hbase.master.cleaner.HFileCleaner cleaner) - Start archiving table for given hfile cleaner
-
Field Details
-
CLASS_RULE
-
LOG
-
UTIL
-
STRING_TABLE_NAME
-
TEST_FAM
-
TABLE_NAME
-
archivingClient
-
toCleanup
-
CONNECTION
-
rss
-
POOL
-
-
Constructor Details
-
TestZooKeeperTableArchiveClient
public TestZooKeeperTableArchiveClient()
-
-
Method Details
-
setupCluster
Setup the config for the cluster
Throws:
Exception
-
setupConf
-
tearDown
- Throws:
Exception
-
cleanupTest
- Throws:
Exception
-
testArchivingEnableDisable
Test turning on/off archiving
Throws:
Exception
-
testArchivingOnSingleTable
- Throws:
Exception
-
testMultipleTables
Test archiving/cleaning across multiple tables, where some are retained, and others aren't
Throws:
Exception - on failure
-
createArchiveDirectory
- Throws:
IOException
-
getArchiveDir
- Throws:
IOException
-
getTableDir
- Throws:
IOException
-
setupAndCreateCleaner
private org.apache.hadoop.hbase.master.cleaner.HFileCleaner setupAndCreateCleaner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path archiveDir, org.apache.hadoop.hbase.Stoppable stop)
turnOnArchiving
private List<org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate> turnOnArchiving(String tableName, org.apache.hadoop.hbase.master.cleaner.HFileCleaner cleaner) throws IOException, org.apache.zookeeper.KeeperException
Start archiving table for given hfile cleaner
Parameters:
tableName - table to archive
cleaner - cleaner to check to make sure change propagated
Returns:
underlying LongTermArchivingHFileCleaner that is managing archiving
Throws:
IOException - on failure
org.apache.zookeeper.KeeperException - on failure
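Since enabling archiving goes through ZooKeeper, the test must wait for the flag to propagate to the cleaner before loading data. That wait can be sketched as a generic poll-until-true helper; `waitFor` and `archivingEnabled` are illustrative names, not HBase API:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BooleanSupplier;

public class PropagationWaitSketch {
    /** Poll the condition until it holds or the timeout expires; return the final state. */
    static boolean waitFor(BooleanSupplier condition, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(10); // back off briefly between checks
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean archivingEnabled = new AtomicBoolean(false);
        // Simulate the ZooKeeper watcher flipping the flag a moment later.
        new Thread(() -> {
            try {
                Thread.sleep(50);
            } catch (InterruptedException ignored) {
            }
            archivingEnabled.set(true);
        }).start();
        System.out.println(waitFor(archivingEnabled::get, 2000)); // prints true
    }
}
```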
-
setupCleanerWatching
private CountDownLatch setupCleanerWatching(org.apache.hadoop.hbase.backup.example.LongTermArchivingHFileCleaner cleaner, List<org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate> cleaners, int expected)
Spy on the LongTermArchivingHFileCleaner to ensure we can catch when the cleaner has seen all the files
Returns:
a CountDownLatch to wait on that releases when the cleaner has been called at least the expected number of times.
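The spy-plus-latch idea can be sketched with plain java.util.concurrent types: wrap a delegate so every file check counts down a latch, which releases once the expected number of files has been seen. `CleanerDelegate` and `watch` here are hypothetical stand-ins for HBase's `BaseHFileCleanerDelegate` and the Mockito spy the test actually uses; this is a minimal sketch, not the real implementation.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class CleanerWatchSketch {
    /** Stand-in for a cleaner delegate; the cleaner calls isFileDeletable once per file. */
    interface CleanerDelegate {
        boolean isFileDeletable(String file);
    }

    /** Wrap a delegate so a latch counts down on every invocation. */
    static CleanerDelegate watch(CleanerDelegate delegate, CountDownLatch latch) {
        return file -> {
            boolean deletable = delegate.isFileDeletable(file);
            latch.countDown(); // one step closer to "cleaner has seen all the files"
            return deletable;
        };
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch finished = new CountDownLatch(3); // expect 3 files to be checked
        CleanerDelegate spy = watch(f -> !f.endsWith(".archived"), finished);
        for (String f : List.of("a.hfile", "b.hfile", "c.archived")) {
            spy.isFileDeletable(f);
        }
        finished.await(); // returns immediately: all 3 files were seen
        System.out.println("cleaner saw all files");
    }
}
```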
-
getAllFiles
private List<org.apache.hadoop.fs.Path> getAllFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
Get all the files (non-directory entries) in the file system under the passed directory
Parameters:
dir - directory to investigate
Returns:
all files under the directory
Throws:
IOException
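A standard-library analogue of this traversal (the real method walks `FileStatus` entries on a Hadoop `FileSystem`): recurse into subdirectories and keep only regular files. This is a sketch of the same idea using `java.nio.file`, not the Hadoop API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ListFilesSketch {
    /** Recursively collect every regular file under dir; directory entries themselves are skipped. */
    static List<Path> getAllFiles(Path dir) throws IOException {
        try (Stream<Path> paths = Files.walk(dir)) {
            return paths.filter(Files::isRegularFile).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("archive");
        Files.createFile(dir.resolve("region1.hfile"));
        Path cf = Files.createDirectories(dir.resolve("cf"));
        Files.createFile(cf.resolve("region2.hfile"));
        // Both files are found; the "cf" subdirectory is not returned as an entry.
        System.out.println(getAllFiles(dir).size()); // prints 2
    }
}
```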
-
loadFlushAndCompact
private void loadFlushAndCompact(org.apache.hadoop.hbase.regionserver.HRegion region, byte[] family) throws IOException
Throws:
IOException
-
createHFileInRegion
private void createHFileInRegion(org.apache.hadoop.hbase.regionserver.HRegion region, byte[] columnFamily) throws IOException
Create a new hfile in the passed region
Parameters:
region - region to operate on
columnFamily - family for which to add data
Throws:
IOException - if doing the put or flush fails
-
runCleaner
private void runCleaner(org.apache.hadoop.hbase.master.cleaner.HFileCleaner cleaner, CountDownLatch finished, org.apache.hadoop.hbase.Stoppable stop) throws InterruptedException
Parameters:
cleaner - the cleaner to use
Throws:
InterruptedException
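The shape of this method (run the cleaner until the watching latch releases, then stop it) can be sketched with stdlib concurrency primitives; here `Runnable` and `AtomicBoolean` are stand-ins for HBase's `HFileCleaner` chore and `Stoppable`, so this is an illustration of the coordination, not the real code.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class RunCleanerSketch {
    /** Run the cleaner chore on its own thread until the latch releases, then signal it to stop. */
    static void runCleaner(Runnable cleanerChore, CountDownLatch finished, AtomicBoolean stop)
            throws InterruptedException {
        Thread cleanerThread = new Thread(() -> {
            while (!stop.get()) {
                cleanerChore.run(); // each pass may examine and delete archived files
            }
        });
        cleanerThread.start();
        finished.await(); // wait until the cleaner has done the expected amount of work
        stop.set(true);   // then ask it to stop
        cleanerThread.join();
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch finished = new CountDownLatch(3);
        AtomicBoolean stop = new AtomicBoolean(false);
        // The "chore" just counts down; the latch releases after three passes.
        runCleaner(finished::countDown, finished, stop);
        System.out.println("cleaner ran at least 3 passes");
    }
}
```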
-