Class TestScannerBlockSizeLimits
java.lang.Object
org.apache.hadoop.hbase.regionserver.TestScannerBlockSizeLimits
-
Field Summary
Fields
static final HBaseClassTestRule CLASS_RULE
private static final byte[] COLUMN1
private static final byte[] COLUMN2
private static final byte[] COLUMN3
private static final byte[] COLUMN5
private static final byte[] DATA
private static final byte[][] FAMILIES
private static final byte[] FAMILY1
private static final byte[] FAMILY2
private static final org.apache.hadoop.hbase.TableName TABLE
private static final HBaseTestingUtil TEST_UTIL
Constructor Summary
Constructors
TestScannerBlockSizeLimits()
Method Summary
Methods
private static void createTestData
private org.apache.hadoop.hbase.client.Scan getBaseScan
    We enable cursors and partial results to give us more granularity over counting of results, and we enable STREAM so that no auto-switching from pread to stream occurs -- that switching throws off the rpc counts.
static void setUp()
void setupEach()
void testCheckLimitAfterFilteringCell()
    At the end of the loop in StoreScanner, we do one more check of size limits.
void testCheckLimitAfterFilteringRowCells()
    After RegionScannerImpl.populateResults, row filters are run.
void testCheckLimitAfterFilteringRowCellsDueToFilterRow()
    After RegionScannerImpl.populateResults, row filters are run.
void testCheckLimitAfterFilterRowKey()
    Tests that we check the size limit after filterRowKey.
void testSeekNextUsingHint()
    Tests that when we seek over blocks we don't include them in the block size of the request.
void testSingleBlock()
    Simplest test that ensures we don't count block sizes too much.
-
Field Details
-
CLASS_RULE
static final HBaseClassTestRule CLASS_RULE
TEST_UTIL
private static final HBaseTestingUtil TEST_UTIL
TABLE
private static final org.apache.hadoop.hbase.TableName TABLE
FAMILY1
private static final byte[] FAMILY1
FAMILY2
private static final byte[] FAMILY2
DATA
private static final byte[] DATA
FAMILIES
private static final byte[][] FAMILIES
COLUMN1
private static final byte[] COLUMN1
COLUMN2
private static final byte[] COLUMN2
COLUMN3
private static final byte[] COLUMN3
COLUMN5
private static final byte[] COLUMN5
-
Constructor Details
-
TestScannerBlockSizeLimits
public TestScannerBlockSizeLimits()
-
-
Method Details
-
setUp
static void setUp() throws Exception
Throws:
Exception
-
setupEach
void setupEach() throws Exception
Throws:
Exception
-
createTestData
Throws:
IOException
InterruptedException
-
testSingleBlock
void testSingleBlock() throws IOException
Simplest test that ensures we don't count block sizes too much. These 2 requested cells are in the same block, so they should be returned in 1 request. If we mis-counted blocks, they'd come back in 2 requests.
Throws:
IOException
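A rough sketch of the shape of such a request (hypothetical table handle, family, and qualifier names; not the test's actual code):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  // Hypothetical helper: both qualifiers are assumed to live in the same
  // HFile block of family "f1".
  static void scanTwoCellsFromOneBlock(Table table) throws IOException {
    Scan scan = new Scan()
        .addColumn(Bytes.toBytes("f1"), Bytes.toBytes("q1"))
        .addColumn(Bytes.toBytes("f1"), Bytes.toBytes("q2"));
    try (ResultScanner scanner = table.getScanner(scan)) {
      Result first = scanner.next();
      // If the shared block were counted once per cell, the size limit could
      // trip between the two cells and split them across two Results (two rpcs).
      assert first != null && first.rawCells().length == 2;
    }
  }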
-
testCheckLimitAfterFilterRowKey
void testCheckLimitAfterFilterRowKey() throws IOException
Tests that we check the size limit after filterRowKey. When filterRowKey excludes a row, we call nextRow to skip to the next row. This should be efficient in this case, but we still need to check size limits after each row is processed. So in this test, we accumulate some block IO reading row 1, then skip row 2, and should return early at that point. The next rpc call starts with row 3's blocks loaded, so it can return the whole row in one rpc. If we were not checking size limits, we'd have been able to load an extra row 3 cell into the first rpc and thus split row 3 across multiple Results.
Throws:
IOException
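A minimal sketch of a filterRowKey-style filter of the kind this scenario exercises; the class, its name, and the skipped row key are assumptions, and a deployable custom filter would also need serialization (toByteArray/parseFrom) to travel over rpc:

  import org.apache.hadoop.hbase.Cell;
  import org.apache.hadoop.hbase.CellUtil;
  import org.apache.hadoop.hbase.filter.FilterBase;
  import org.apache.hadoop.hbase.util.Bytes;

  /** Hypothetical filter: excludes one whole row by row key. */
  public class SkipSecondRowFilter extends FilterBase {
    private static final byte[] SKIPPED_ROW = Bytes.toBytes("row2"); // assumed key

    @Override
    public boolean filterRowKey(Cell firstRowCell) {
      // Returning true makes the scanner call nextRow() to skip this row, but
      // the blocks already read to reach it still count toward the scan's size
      // limit, so the limit must be re-checked after the skip.
      return Bytes.equals(CellUtil.cloneRow(firstRowCell), SKIPPED_ROW);
    }
  }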
-
testCheckLimitAfterFilteringRowCellsDueToFilterRow
void testCheckLimitAfterFilteringRowCellsDueToFilterRow() throws IOException
After RegionScannerImpl.populateResults, row filters are run. If a row is excluded by filter.filterRow(), nextRow() is called, which might accumulate more block IO. Validates that in this case we still honor block limits.
Throws:
IOException
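A minimal sketch of a filterRow()-based exclusion (hypothetical class, not the filter used by the test); hasFilterRow() must return true for filterRow() to be consulted after the row's cells are populated:

  import org.apache.hadoop.hbase.filter.FilterBase;

  /** Hypothetical filter: drops every row, but only after it has been populated. */
  public class DropRowViaFilterRow extends FilterBase {
    @Override
    public boolean hasFilterRow() {
      return true; // ask the scanner to run filterRow() per row
    }

    @Override
    public boolean filterRow() {
      // By the time this returns true the row's blocks have already been read,
      // and the nextRow() that follows may read even more, so block limits
      // still have to be enforced afterwards.
      return true;
    }
  }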
-
testCheckLimitAfterFilteringCell
void testCheckLimitAfterFilteringCell() throws IOException
At the end of the loop in StoreScanner, we do one more check of size limits. This is to catch the block size being exceeded while filtering cells within a store. This test ensures that we do that check; otherwise we'd see no cursors below.
Throws:
IOException
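One way to end up filtering individual cells inside a store is a qualifier match that excludes most cells; the sketch below is an assumption about the setup, not the test's actual filter, and the qualifier name is made up:

  import org.apache.hadoop.hbase.CompareOperator;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.BinaryComparator;
  import org.apache.hadoop.hbase.filter.QualifierFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  // Every non-matching cell is dropped inside StoreScanner's loop, but its
  // block has already been loaded and counted, so the extra size check at the
  // end of the loop is what produces the cursor results.
  Scan scan = new Scan()
      .setAllowPartialResults(true)
      .setNeedCursorResult(true)
      .setFilter(new QualifierFilter(CompareOperator.EQUAL,
          new BinaryComparator(Bytes.toBytes("wantedQualifier"))));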
-
testCheckLimitAfterFilteringRowCells
void testCheckLimitAfterFilteringRowCells() throws IOException
After RegionScannerImpl.populateResults, row filters are run. If a row is excluded by filter.filterRowCells(), we fall through to a final results.isEmpty() check near the end of the method. If results are empty at that point (which they are here), nextRow() is called, which might accumulate more block IO. Validates that in this case we still honor block limits.
Throws:
IOException
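A minimal sketch of a filterRowCells()-style exclusion (hypothetical class, not the filter used by the test):

  import java.util.List;
  import org.apache.hadoop.hbase.Cell;
  import org.apache.hadoop.hbase.filter.FilterBase;

  /** Hypothetical filter: empties the row's cell list after it is populated. */
  public class DropAllRowCellsFilter extends FilterBase {
    @Override
    public boolean hasFilterRow() {
      return true; // ensures filterRowCells() is consulted per row
    }

    @Override
    public void filterRowCells(List<Cell> kvs) {
      // The emptied result leaves results.isEmpty() true, routing the scanner
      // into its nextRow() path, which can read more blocks; those reads must
      // still count against the size limit.
      kvs.clear();
    }
  }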
-
testSeekNextUsingHint
void testSeekNextUsingHint() throws IOException
Tests that when we seek over blocks we don't include them in the block size of the request.
Throws:
IOException
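Seek hints come from filters that return SEEK_NEXT_USING_HINT, such as MultiRowRangeFilter; a rough sketch of a scan that seeks over a range of rows (row keys are assumptions, not taken from the test):

  import java.util.Arrays;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
  import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
  import org.apache.hadoop.hbase.util.Bytes;

  // Everything between the two ranges is seeked over via SEEK_NEXT_USING_HINT,
  // so the skipped blocks are never read and must not be added to the
  // request's accumulated block size.
  Scan scan = new Scan().setFilter(new MultiRowRangeFilter(Arrays.asList(
      new RowRange(Bytes.toBytes("row1"), true, Bytes.toBytes("row2"), false),
      new RowRange(Bytes.toBytes("row8"), true, Bytes.toBytes("row9"), false))));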
-
getBaseScan
We enable cursors and partial results to give us more granularity over counting of results, and we enable STREAM so that no auto-switching from pread to stream occurs -- that switching throws off the rpc counts.
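A sketch of a Scan configured along these lines (the size limit value is an assumption, not taken from the test):

  import org.apache.hadoop.hbase.client.Scan;

  // Partial results plus cursors expose every chunk the server returns, and
  // pinning the read type to STREAM keeps the pread-to-stream switch from
  // changing the rpc counts mid-scan.
  Scan scan = new Scan()
      .setAllowPartialResults(true)
      .setNeedCursorResult(true)
      .setReadType(Scan.ReadType.STREAM)
      .setMaxResultSize(1); // assumed: a tiny limit so block accounting trips it quickly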
-