Class TestScannerBlockSizeLimits

java.lang.Object
org.apache.hadoop.hbase.regionserver.TestScannerBlockSizeLimits

public class TestScannerBlockSizeLimits extends Object
  • Method Details

    • setUp

      public static void setUp() throws Exception
      Throws:
      Exception
    • setupEach

      public void setupEach() throws Exception
      Throws:
      Exception
    • createTestData

      private static void createTestData() throws IOException, InterruptedException
      Throws:
      IOException
      InterruptedException
    • testSingleBlock

      public void testSingleBlock() throws IOException
      Simplest test that ensures we don't over-count block sizes. The 2 requested cells are in the same block, so they should be returned in 1 request. If we miscounted block sizes, they would come back in 2 requests. A sketch of such a request follows below.
      Throws:
      IOException
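      A hedged sketch of such a request with the HBase client API (the table handle, FAMILY, and column names are illustrative assumptions, not the test's actual data):

        // Illustrative only: two cells that share an HFile block should arrive in one RPC.
        Scan scan = getBaseScan()
            .addColumn(FAMILY, Bytes.toBytes("col0"))
            .addColumn(FAMILY, Bytes.toBytes("col1"));
        try (ResultScanner scanner = table.getScanner(scan)) {
          for (Result result : scanner) {
            // with correct block-size accounting this loop sees a single Result
          }
        }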
    • testCheckLimitAfterFilterRowKey

      public void testCheckLimitAfterFilterRowKey() throws IOException
      Tests that we check the size limit after filterRowKey. When filterRowKey excludes a row, we call nextRow() to skip to the next row. That should be efficient in this case, but we still need to check size limits after each row is processed. So in this test, we accumulate some block IO reading row 1, then skip row 2 and should return early at that point. The next RPC call starts with row 3's blocks already loaded, so it can return the whole row in one RPC. If we were not checking size limits, we'd have been able to load an extra row 3 cell into the first RPC and thus split row 3 across multiple Results. See the filter sketch below.
      Throws:
      IOException
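      As a rough illustration (the actual test may use a different filter, and the "row2" key bytes are an assumption), a filter that rejects a row in filterRowKey() drives the scanner down the nextRow() path described above:

        // RowFilter implements filterRowKey(), so row 2 is skipped without
        // materializing its cells, while block IO from row 1 still counts.
        Filter skipRow2 = new RowFilter(CompareOperator.NOT_EQUAL,
            new BinaryComparator(Bytes.toBytes("row2")));
        Scan scan = getBaseScan().setFilter(skipRow2);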
    • testCheckLimitAfterFilteringRowCellsDueToFilterRow

      public void testCheckLimitAfterFilteringRowCellsDueToFilterRow() throws IOException
      After RegionScannerImpl.populateResults, row filters are run. If a row is excluded due to filter.filterRow(), nextRow() is called, which might accumulate more block IO. Validates that in this case we still honor block limits. See the filter sketch below.
      Throws:
      IOException
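      A hedged example of a filter that exercises this path (FAMILY, the column, and the value are assumed names; the real test may define its own filter): a SingleColumnValueFilter with filterIfMissing rejects whole rows from filterRow().

        // Rows whose "col1" is missing or does not match are dropped in filterRow(),
        // after which nextRow() may read further blocks; size limits must still apply.
        SingleColumnValueFilter scvf = new SingleColumnValueFilter(
            FAMILY, Bytes.toBytes("col1"), CompareOperator.EQUAL, Bytes.toBytes("value"));
        scvf.setFilterIfMissing(true);
        Scan scan = getBaseScan().setFilter(scvf);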
    • testCheckLimitAfterFilteringCell

      public void testCheckLimitAfterFilteringCell() throws IOException
      At the end of the loop in StoreScanner, we do one more check of size limits. This is to catch the block size being exceeded while filtering cells within a store. This test ensures that we do that; otherwise we'd see no cursor results. See the filter sketch below.
      Throws:
      IOException
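      For example (the column name is an assumption), a qualifier filter that skips most cells in a row still accumulates block IO while skipping, which is what the end-of-loop check catches:

        // While non-matching qualifiers are filtered out, blocks are still read;
        // the final size check in StoreScanner returns a cursor once the limit is hit.
        Filter onlyCol5 = new QualifierFilter(CompareOperator.EQUAL,
            new BinaryComparator(Bytes.toBytes("col5")));
        Scan scan = getBaseScan().setFilter(onlyCol5);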
    • testCheckLimitAfterFilteringRowCells

      public void testCheckLimitAfterFilteringRowCells() throws IOException
      After RegionScannerImpl.populateResults, row filters are run. If a row is excluded due to filter.filterRowCells(), we fall through to a final results.isEmpty() check near the end of the method. If results are empty at that point (which they are), nextRow() is called, which might accumulate more block IO. Validates that in this case we still honor block limits. See the filter sketch below.
      Throws:
      IOException
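      A minimal sketch of a filter hitting that branch, assuming a custom FilterBase subclass (a real custom filter also needs protobuf serialization so region servers can instantiate it; that is omitted here):

        Filter dropRow2Cells = new FilterBase() {
          @Override
          public void filterRowCells(List<Cell> cells) {
            // For row 2 every cell is removed, so results.isEmpty() is true and
            // RegionScannerImpl takes the nextRow() branch described above.
            cells.removeIf(c -> Bytes.equals(CellUtil.cloneRow(c), Bytes.toBytes("row2")));
          }

          @Override
          public boolean hasFilterRow() {
            return true; // required for filterRowCells() to be invoked
          }
        };
        Scan scan = getBaseScan().setFilter(dropRow2Cells);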
    • testSeekNextUsingHint

      public void testSeekNextUsingHint() throws IOException
      Tests that when we seek over blocks, we don't include them in the block size of the request. See the filter sketch below.
      Throws:
      IOException
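      For instance (the column range is an assumption), a ColumnRangeFilter answers SEEK_NEXT_USING_HINT for cells before the range, letting the scanner jump over intervening blocks:

        // Blocks skipped via the seek hint should not count toward the
        // request's block-size limit.
        Filter lateColumns = new ColumnRangeFilter(
            Bytes.toBytes("col90"), true, Bytes.toBytes("col99"), true);
        Scan scan = getBaseScan().setFilter(lateColumns);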
    • getBaseScan

      private org.apache.hadoop.hbase.client.Scan getBaseScan()
      We enable cursors and partial results to give us more granularity over counting of results, and we enable STREAM read type so that no automatic switching from pread to stream occurs, since that would throw off the RPC counts. A sketch of this configuration follows below.
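      A minimal sketch of that configuration, assuming the standard Scan API (the actual scan may set additional options):

        Scan scan = new Scan()
            .setNeedCursorResult(true)          // cursors give visibility into early returns
            .setAllowPartialResults(true)       // partial results for finer-grained counting
            .setReadType(Scan.ReadType.STREAM); // avoid pread-to-stream auto switching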