Package org.apache.hadoop.hbase.wal
Class TestWALSplitToHFile
java.lang.Object
org.apache.hadoop.hbase.wal.TestWALSplitToHFile
-
Field Summary
Fields
static final HBaseClassTestRule CLASS_RULE
private org.apache.hadoop.conf.Configuration conf
private static final int countPerFamily
private final org.apache.hadoop.hbase.util.EnvironmentEdge ee
private org.apache.hadoop.fs.FileSystem fs
private static final org.slf4j.Logger LOG
private org.apache.hadoop.fs.Path logDir
private String logName
private org.apache.hadoop.fs.Path oldLogDir
private static final byte[] QUALIFIER
private org.apache.hadoop.fs.Path rootDir
private static final byte[] ROW
final org.junit.rules.TestName TEST_NAME
(package private) static final HBaseTestingUtility UTIL
private static final byte[] VALUE1
private static final byte[] VALUE2
private org.apache.hadoop.hbase.wal.WALFactory wals -
Constructor Summary
Constructors
TestWALSplitToHFile() -
Method Summary
Modifier and Type / Method / Description
private org.apache.hadoop.hbase.client.TableDescriptor createBasic3FamilyTD(org.apache.hadoop.hbase.TableName tableName)
private org.apache.hadoop.hbase.wal.WAL createWAL(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path hbaseRootDir, String logName)
private org.apache.hadoop.hbase.wal.WAL createWAL(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, String logName)
private void deleteDir(org.apache.hadoop.fs.Path p)
private int getScannedCount(org.apache.hadoop.hbase.regionserver.RegionScanner scanner)
void setUp()
static void setUpBeforeClass()
private org.apache.hadoop.hbase.util.Pair<org.apache.hadoop.hbase.client.TableDescriptor,org.apache.hadoop.hbase.client.RegionInfo> setupTableAndRegion()
void tearDown()
static void tearDownAfterClass()
void testAfterAbortingFlush() - Test that we can recover the data correctly after aborting a flush.
void testAfterPartialFlush() - Test that we recover correctly when there is a failure in between the flushes.
void testCorruptRecoveredHFile()
void testDifferentRootDirAndWALRootDir()
void testPutWithSameTimestamp()
void testRecoverSequenceId()
void testWrittenViaHRegion() - Test writing edits into an HRegion, closing it, splitting logs, opening the Region again.
private void writeCorruptRecoveredHFile(org.apache.hadoop.fs.Path recoveredHFile)
private void writeData(org.apache.hadoop.hbase.client.TableDescriptor td, org.apache.hadoop.hbase.regionserver.HRegion region)
-
Field Details
-
CLASS_RULE
-
LOG
-
UTIL
-
ee
-
rootDir
-
logName
-
oldLogDir
-
logDir
-
fs
-
conf
-
wals
-
ROW
-
QUALIFIER
-
VALUE1
-
VALUE2
-
countPerFamily
-
-
TEST_NAME
-
-
Constructor Details
-
TestWALSplitToHFile
public TestWALSplitToHFile()
-
-
Method Details
-
setUpBeforeClass
public static void setUpBeforeClass() throws Exception
- Throws:
Exception
-
tearDownAfterClass
public static void tearDownAfterClass() throws Exception
- Throws:
Exception
-
setUp
public void setUp() throws Exception
- Throws:
Exception
-
tearDown
public void tearDown() throws Exception
- Throws:
Exception
-
deleteDir
private void deleteDir(org.apache.hadoop.fs.Path p) throws IOException
- Throws:
IOException
-
createBasic3FamilyTD
private org.apache.hadoop.hbase.client.TableDescriptor createBasic3FamilyTD(org.apache.hadoop.hbase.TableName tableName) throws IOException
- Throws:
IOException
-
createWAL
private org.apache.hadoop.hbase.wal.WAL createWAL(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path hbaseRootDir, String logName) throws IOException
- Throws:
IOException
-
createWAL
private org.apache.hadoop.hbase.wal.WAL createWAL(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, String logName) throws IOException
- Throws:
IOException
-
setupTableAndRegion
private org.apache.hadoop.hbase.util.Pair<org.apache.hadoop.hbase.client.TableDescriptor,org.apache.hadoop.hbase.client.RegionInfo> setupTableAndRegion() throws IOException
- Throws:
IOException
-
writeData
private void writeData(org.apache.hadoop.hbase.client.TableDescriptor td, org.apache.hadoop.hbase.regionserver.HRegion region) throws IOException
- Throws:
IOException
-
testDifferentRootDirAndWALRootDir
public void testDifferentRootDirAndWALRootDir() throws Exception
- Throws:
Exception
-
testCorruptRecoveredHFile
public void testCorruptRecoveredHFile() throws Exception
- Throws:
Exception
-
testPutWithSameTimestamp
public void testPutWithSameTimestamp() throws Exception
- Throws:
Exception
-
testRecoverSequenceId
public void testRecoverSequenceId() throws Exception
- Throws:
Exception
-
testWrittenViaHRegion
public void testWrittenViaHRegion() throws IOException, SecurityException, IllegalArgumentException, InterruptedException
Test writing edits into an HRegion, closing it, splitting logs, opening the Region again. Verify seqids.
- Throws:
IOException
SecurityException
IllegalArgumentException
InterruptedException
-
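The seqid invariant this test verifies can be sketched with a toy model (plain Java, not HBase code; the class and method names below are hypothetical illustrations): after edits are replayed from a split WAL, the reopened region's next sequence id must be greater than every replayed edit's id, so no seqid is ever reused.

```java
import java.util.ArrayList;
import java.util.List;

public class SeqIdModel {
    /** Toy WAL: hands out strictly increasing sequence ids. */
    static class ToyWal {
        private long nextSeqId = 1;
        final List<Long> edits = new ArrayList<>();
        long append() {                 // each appended edit gets the next seqid
            long id = nextSeqId++;
            edits.add(id);
            return id;
        }
    }

    /** On reopen, the region's next seqid starts above the max replayed edit. */
    static long nextSeqIdAfterReplay(List<Long> replayedEdits) {
        long max = 0;
        for (long id : replayedEdits) max = Math.max(max, id);
        return max + 1;
    }

    public static void main(String[] args) {
        ToyWal wal = new ToyWal();
        for (int i = 0; i < 5; i++) wal.append();   // write 5 edits: seqids 1..5
        long next = nextSeqIdAfterReplay(wal.edits);
        System.out.println(next);                   // prints 6
    }
}
```

This is only the bookkeeping rule; the real test drives it through an actual HRegion, WAL split, and region reopen.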
testAfterPartialFlush
Test that we recover correctly when there is a failure in between the flushes, i.e. some stores got flushed but others did not. Unfortunately, there is no easy hook to flush at a store level. The way we get around this is by flushing at the region level and then deleting the recently flushed store file for one of the stores. This puts us back in the situation where all but that store got flushed before the region died. We restart the region and verify that the edits were replayed. -
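The recovery rule this test exercises can be sketched as a toy model (plain Java, not HBase code; all names below are hypothetical): after a crash, WAL edits must be replayed for exactly those stores whose flushed file did not survive.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class PartialFlushModel {
    /**
     * walEdits: per-store edits recorded in the WAL before the flush.
     * survivingStores: stores whose flushed file survived the crash.
     * Recovery replays WAL edits only for stores whose flushed file was lost.
     */
    static Map<String, List<String>> recover(Map<String, List<String>> walEdits,
                                             Set<String> survivingStores) {
        Map<String, List<String>> replayed = new HashMap<>();
        for (Map.Entry<String, List<String>> e : walEdits.entrySet()) {
            if (!survivingStores.contains(e.getKey())) {
                replayed.put(e.getKey(), e.getValue()); // restore from the WAL
            }
        }
        return replayed;
    }

    public static void main(String[] args) {
        Map<String, List<String>> wal = Map.of(
            "a", List.of("e1"), "b", List.of("e2"), "c", List.of("e3"));
        // Simulate the test: all three stores flushed, then store "a"'s
        // flushed file was deleted before the region "died".
        Set<String> survived = Set.of("b", "c");
        System.out.println(recover(wal, survived)); // prints {a=[e1]}
    }
}
```

The real test achieves the same effect by flushing the whole region and deleting one store's flushed file, since per-store flush has no easy hook.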
testAfterAbortingFlush
Test that we can recover the data correctly after aborting a flush. In the test, we first abort a flush after writing some data, then write more data and flush again, and finally verify the data.
- Throws:
IOException
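The invariant behind the aborted-flush scenario can be sketched as a toy model (plain Java, not HBase code; all names below are hypothetical): an aborted flush must leave the memstore untouched, so a scan taken after later writes and a successful flush still sees every edit.

```java
import java.util.ArrayList;
import java.util.List;

public class AbortedFlushModel {
    final List<String> memstore = new ArrayList<>();
    final List<String> hfiles = new ArrayList<>();

    void write(String edit) { memstore.add(edit); }

    /**
     * An aborted flush drops nothing: the memstore keeps its edits.
     * A successful flush moves the memstore contents into an hfile.
     */
    void flush(boolean abort) {
        if (abort) return;
        hfiles.addAll(memstore);
        memstore.clear();
    }

    /** Data visible to a scan: flushed files plus the live memstore. */
    List<String> scanAll() {
        List<String> all = new ArrayList<>(hfiles);
        all.addAll(memstore);
        return all;
    }

    public static void main(String[] args) {
        AbortedFlushModel region = new AbortedFlushModel();
        region.write("v1");
        region.write("v2");
        region.flush(true);               // aborted flush: v1, v2 must survive
        region.write("v3");
        region.flush(false);              // successful flush
        System.out.println(region.scanAll()); // prints [v1, v2, v3]
    }
}
```

The real test additionally crashes and reopens the region so the surviving edits are recovered from the split WAL rather than the memstore.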
-
getScannedCount
private int getScannedCount(org.apache.hadoop.hbase.regionserver.RegionScanner scanner) throws IOException
- Throws:
IOException
-
writeCorruptRecoveredHFile
private void writeCorruptRecoveredHFile(org.apache.hadoop.fs.Path recoveredHFile) throws Exception
- Throws:
Exception
-