static interface HFileBlock.FSReader
Modifier and Type | Method and Description
---|---
HFileBlock.BlockIterator | blockRange(long startOffset, long endOffset): Creates a block iterator over the given portion of the HFile.
void | closeStreams(): Closes the backing streams.
HFileBlockDecodingContext | getBlockDecodingContext(): Get a decoder for BlockType.ENCODED_DATA blocks from this file.
HFileBlockDecodingContext | getDefaultBlockDecodingContext(): Get the default decoder for blocks from this file.
HFileBlock | readBlockData(long offset, long onDiskSize, boolean pread, boolean updateMetrics, boolean intoHeap): Reads the block at the given offset in the file with the given on-disk size.
void | setDataBlockEncoder(HFileDataBlockEncoder encoder, org.apache.hadoop.conf.Configuration conf)
void | setIncludesMemStoreTS(boolean includesMemstoreTS)
void | unbufferStream(): Closes the stream's socket.
HFileBlock readBlockData(long offset, long onDiskSize, boolean pread, boolean updateMetrics, boolean intoHeap) throws IOException

Reads the block at the given offset in the file with the given on-disk size.

Parameters:
offset - offset of the block in the file to read
onDiskSize - the on-disk size of the entire block, including all applicable headers, or -1 if unknown
pread - true to use pread, otherwise use the stream read
updateMetrics - whether or not to update the metrics
intoHeap - whether to allocate the block's ByteBuff via the ByteBuffAllocator or on the JVM heap. For LRUBlockCache, the block to cache must be a heap one, because memory occupation is currently accounted against the heap; likewise, CombinedBlockCache uses the on-heap LRUBlockCache as its L1 cache for small blocks such as IndexBlock or MetaBlock, for faster access. This flag therefore decides whether to allocate on the JVM heap, avoiding an extra off-heap-to-heap copy when using LRUBlockCache. In most cases the expected block type is known in advance; in the special cases where it is not (for example, HFileReaderImpl#readNextDataBlock()), the block's ByteBuff is first allocated from the ByteBuffAllocator, and when caching it in LruBlockCache we check whether the ByteBuff is on the heap; if not, it is cloned to a heap one and that clone is cached.
Throws:
IOException
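A minimal usage sketch, not HBase's own code: since FSReader is not declared public, the sketch is placed in the org.apache.hadoop.hbase.io.hfile package, and it assumes an already constructed reader (how one is obtained is outside this page's scope). The class and method names are hypothetical.

```java
package org.apache.hadoop.hbase.io.hfile;

import java.io.IOException;

// Lives in the same package because FSReader is package-private.
class ReadFirstBlockSketch {
  static HFileBlock readFirstBlock(HFileBlock.FSReader reader) throws IOException {
    // offset 0: first block; onDiskSize -1: unknown, per the docs above;
    // pread = true: positional read; updateMetrics = false;
    // intoHeap = true: heap-allocate the ByteBuff, e.g. when the block
    // is destined for the on-heap LRUBlockCache.
    return reader.readBlockData(0, -1, true, false, true);
  }
}
```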
HFileBlock.BlockIterator blockRange(long startOffset, long endOffset)

Creates a block iterator over the given portion of the HFile. The iterator returns blocks whose offsets satisfy startOffset <= offset < endOffset. Returned blocks are always unpacked. Used when no hfile index is available, e.g. when reading the hfile index blocks themselves on file open.

Parameters:
startOffset - the offset of the block to start iteration with
endOffset - the offset to end iteration at (exclusive)

void closeStreams() throws IOException

Closes the backing streams.

Throws:
IOException
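A hedged sketch of walking a block range and then releasing the streams. It assumes that BlockIterator.nextBlock() returns null once endOffset is reached, and that fileSize is obtained elsewhere (e.g. from the file's trailer); class and method names are hypothetical.

```java
package org.apache.hadoop.hbase.io.hfile;

import java.io.IOException;

class ScanBlocksSketch {
  static void scanAll(HFileBlock.FSReader reader, long fileSize) throws IOException {
    HFileBlock.BlockIterator it = reader.blockRange(0, fileSize);
    for (HFileBlock block = it.nextBlock(); block != null; block = it.nextBlock()) {
      // Blocks returned by blockRange() are always unpacked (see above).
      System.out.println(block.getBlockType() + " at offset " + block.getOffset());
    }
    reader.closeStreams(); // close the backing streams when done
  }
}
```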
HFileBlockDecodingContext getBlockDecodingContext()

Get a decoder for BlockType.ENCODED_DATA blocks from this file.

HFileBlockDecodingContext getDefaultBlockDecodingContext()

Get the default decoder for blocks from this file.
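The two getters above differ only in which blocks they serve. A small sketch of the selection rule described in their docs; the helper class and method names are hypothetical.

```java
package org.apache.hadoop.hbase.io.hfile;

import org.apache.hadoop.hbase.io.encoding.HFileBlockDecodingContext;

class DecodingContextSketch {
  static HFileBlockDecodingContext contextFor(HFileBlock.FSReader reader, BlockType type) {
    return type == BlockType.ENCODED_DATA
      ? reader.getBlockDecodingContext()         // decoder for ENCODED_DATA blocks
      : reader.getDefaultBlockDecodingContext(); // default decoder for everything else
  }
}
```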
void setIncludesMemStoreTS(boolean includesMemstoreTS)

void setDataBlockEncoder(HFileDataBlockEncoder encoder, org.apache.hadoop.conf.Configuration conf)

void unbufferStream()

Closes the stream's socket.
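A sketch of wiring the reader's mutable state, under stated assumptions: NoOpDataBlockEncoder.INSTANCE stands in for whatever encoder the file was actually written with, and calling unbufferStream() between read bursts is one possible resource-management choice, not a required step. Names other than the interface's own methods are hypothetical.

```java
package org.apache.hadoop.hbase.io.hfile;

import org.apache.hadoop.conf.Configuration;

class ConfigureReaderSketch {
  static void configure(HFileBlock.FSReader reader, Configuration conf) {
    reader.setDataBlockEncoder(NoOpDataBlockEncoder.INSTANCE, conf);
    reader.setIncludesMemStoreTS(true); // blocks carry memstore timestamps
    reader.unbufferStream();            // drop the stream's socket while reads are idle
  }
}
```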