@InterfaceAudience.Private public class HFileBlock extends Object implements Cacheable
Reads HFile version 2 blocks from HFiles and exposes them to caches via the Cacheable interface. Version 2 was introduced in hbase-0.92.0; support for version 1 blocks was removed in hbase-1.3.0.
Version 1 was the original file block format. Version 2 was introduced when the hbase file format changed to support multi-level block indexes and compound bloom filters (HBASE-3857).
In version 2, a block is structured roughly as follows:

- Header: starts with a magic record identifying the BlockType (8 bytes), e.g. DATABLK*, followed by the block size fields, the previous-block offset, and (when HBase checksums are in use) checksum metadata.
- Data: the compression algorithm is the same for all the blocks in an HFile. If compression is NONE, this is just raw, serialized Cells.
- Tail: trailing checksums, if present.

Blocks are handed to caches via Cacheable.serialize(ByteBuffer, boolean) and reconstructed via Cacheable.getDeserializer().
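As a rough illustration of the layout above, here is a hypothetical parse of the fixed header fields from a ByteBuffer. The field order and widths are assumptions inferred from the fields this class exposes (block type magic, on-disk and uncompressed sizes, previous-block offset, checksum metadata), not an authoritative specification of the on-disk format.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class HFileBlockHeaderSketch {
  // Sketch only: field order and widths are assumptions for illustration,
  // inferred from the fields HFileBlock exposes, not an authoritative layout.
  static void parseHeader(ByteBuffer buf, boolean usesHBaseChecksum) {
    byte[] magic = new byte[8];                       // block type magic, e.g. DATABLK*
    buf.get(magic);
    int onDiskSizeWithoutHeader = buf.getInt();       // compressed ('on disk') size, excluding header
    int uncompressedSizeWithoutHeader = buf.getInt(); // uncompressed size, excluding header
    long prevBlockOffset = buf.getLong();             // offset of the previous block of the same type

    System.out.println(new String(magic, StandardCharsets.US_ASCII)
        + " onDisk=" + onDiskSizeWithoutHeader
        + " uncompressed=" + uncompressedSizeWithoutHeader
        + " prevOffset=" + prevBlockOffset);

    if (usesHBaseChecksum) {
      byte checksumType = buf.get();                  // checksum algorithm ordinal
      int bytesPerChecksum = buf.getInt();            // data bytes covered per checksum chunk
      int onDiskDataSizeWithHeader = buf.getInt();    // on-disk data size, including header
      System.out.println("checksumType=" + checksumType
          + " bytesPerChecksum=" + bytesPerChecksum
          + " onDiskDataSizeWithHeader=" + onDiskDataSizeWithHeader);
    }
  }
}
```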
TODO: Should we cache the checksums? Down in Writer#getBlockForCaching(CacheConfig), where we make a block to cache-on-write, there is an attempt at turning off checksums. This is not the only place we get blocks to cache. We also will cache the raw return from an hdfs read. In this case, the checksums may be present. If the cache is backed by something that doesn't do ECC, say an SSD, we might want to preserve checksums. For now this is an open question.
TODO: Over in BucketCache, we save a block allocation by doing a custom serialization. Be sure to change it if serialization changes in here. Could we add a method here that takes an IOEngine and that then serializes to it rather than expose our internals over in BucketCache? IOEngine is in the bucket subpackage. Pull it up? Then this class knows about bucketcache. Ugh.
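As a rough sketch of the caching path, the following hypothetical snippet serializes a block into a caller-supplied buffer the way a cache implementation might, using only methods documented on this page; how the block itself is obtained is assumed and out of scope.

```java
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.io.hfile.Cacheable;
import org.apache.hadoop.hbase.io.hfile.CacheableDeserializer;
import org.apache.hadoop.hbase.io.hfile.HFileBlock;

public class HFileBlockCachingSketch {
  // 'block' is assumed to come from a reader or writer; obtaining it is not shown here.
  static ByteBuffer serializeForCache(HFileBlock block) {
    // Ask the block how much room its serialized form needs, then let it write itself out.
    ByteBuffer dest = ByteBuffer.allocate(block.getSerializedLength());
    block.serialize(dest, true /* includeNextBlockOnDiskSize */);
    dest.flip();

    // Extra metadata intended for use by bucketcache (see getMetaData()).
    ByteBuffer meta = block.getMetaData();

    // The deserializer a cache would later use to reconstruct the block from its bytes.
    CacheableDeserializer<Cacheable> deserializer = block.getDeserializer();

    System.out.println("serialized " + dest.remaining() + " bytes, metadata "
        + meta.remaining() + " bytes, deserializer " + deserializer);
    return dest;
  }
}
```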
| Modifier and Type | Class and Description |
|---|---|
| static class | HFileBlock.Writer: Unified version 2 HFile block writer. |
| Modifier and Type | Field and Description |
|---|---|
| static int | BYTE_BUFFER_HEAP_SIZE |
| static boolean | DONT_FILL_HEADER |
| static boolean | FILL_HEADER |
| Constructor and Description |
|---|
| HFileBlock(BlockType blockType, int onDiskSizeWithoutHeader, int uncompressedSizeWithoutHeader, long prevBlockOffset, ByteBuffer b, boolean fillHeader, long offset, int nextBlockOnDiskSize, int onDiskDataSizeWithHeader, HFileContext fileContext): Creates a new HFile block from the given fields (see the sketch after this table). |
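A minimal, hypothetical sketch of wrapping already-serialized block bytes with this constructor, roughly as HFileBlock.Writer.getBlockForCaching(CacheConfig) does when stuffing a freshly written block into the cache. All sizes, the offset, and the HFileContext settings below are illustrative assumptions, not values from a real file.

```java
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.io.hfile.BlockType;
import org.apache.hadoop.hbase.io.hfile.HFileBlock;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

public class HFileBlockConstructionSketch {
  public static void main(String[] args) {
    // Illustrative values only; in real code these come from the writer or the on-disk header.
    int dataSize = 4096;
    int headerSize = HConstants.HFILEBLOCK_HEADER_SIZE;

    // Buffer holding the block header followed by the (uncompressed, unencoded) block data.
    ByteBuffer blockBuf = ByteBuffer.allocate(headerSize + dataSize);

    HFileContext fileContext = new HFileContextBuilder().build();

    HFileBlock block = new HFileBlock(
        BlockType.DATA,
        dataSize,                  // onDiskSizeWithoutHeader (no compression assumed)
        dataSize,                  // uncompressedSizeWithoutHeader
        -1L,                       // prevBlockOffset: no previous block of this type
        blockBuf,
        HFileBlock.FILL_HEADER,    // write the first header fields into the passed buffer
        0L,                        // offset of the block in the (hypothetical) file
        -1,                        // nextBlockOnDiskSize: unknown here
        headerSize + dataSize,     // onDiskDataSizeWithHeader
        fileContext);

    System.out.println(block.getBlockType() + " headerSize=" + block.headerSize()
        + " uncompressedSizeWithoutHeader=" + block.getUncompressedSizeWithoutHeader());
  }
}
```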
| Modifier and Type | Method and Description |
|---|---|
| boolean | equals(Object comparison) |
| BlockType | getBlockType() |
| ByteBuffer | getBufferReadOnly(): Returns a read-only duplicate of the buffer this block stores internally, ready to be read (see the sketch after this table). |
| ByteBuffer | getBufferWithoutHeader(): Returns a buffer that does not include the header or checksum. |
| DataInputStream | getByteStream() |
| DataBlockEncoding | getDataBlockEncoding() |
| short | getDataBlockEncodingId() |
| CacheableDeserializer<Cacheable> | getDeserializer(): Returns the CacheableDeserializer instance that reconstructs the original object from a ByteBuffer. |
| ByteBuffer | getMetaData(): For use by bucketcache. |
| int | getNextBlockOnDiskSize() |
| int | getOnDiskSizeWithHeader() |
| int | getSerializedLength(): Returns the length of the ByteBuffer required to serialize the object. |
| int | getUncompressedSizeWithoutHeader() |
| int | hashCode() |
| int | headerSize(): Returns the size of this block's header. |
| static int | headerSize(boolean usesHBaseChecksum): Maps a minor version to the size of the header. |
| long | heapSize() |
| boolean | isUnpacked(): Returns true when this block's buffer has been unpacked, false otherwise. |
| void | sanityCheckUncompressedSize(): An additional sanity check in case no compression or encryption is being used. |
| void | serialize(ByteBuffer destination, boolean includeNextBlockOnDiskSize): Serializes its data into destination. |
| String | toString() |
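To make the difference between the buffer accessors above concrete, here is a hypothetical sketch that inspects a block; it assumes `block` was obtained elsewhere (from a reader or a cache) and uses only methods listed in the table.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.hbase.io.hfile.HFileBlock;

public class HFileBlockBufferSketch {
  // 'block' is assumed to come from a reader or a cache; obtaining it is not shown here.
  static void inspect(HFileBlock block) throws IOException {
    // Read-only duplicate of the whole block: header, content, and any follow-on checksums.
    ByteBuffer whole = block.getBufferReadOnly();

    // Content only: no header and no checksums.
    ByteBuffer content = block.getBufferWithoutHeader();

    // Stream view of the block, convenient for sequential reads.
    DataInputStream in = block.getByteStream();

    System.out.println(block.getBlockType()
        + " unpacked=" + block.isUnpacked()
        + " headerSize=" + block.headerSize()
        + " whole=" + whole.remaining()
        + " content=" + content.remaining()
        + " streamAvailable=" + in.available());
  }
}
```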
public static final boolean FILL_HEADER
public static final boolean DONT_FILL_HEADER
public static final int BYTE_BUFFER_HEAP_SIZE
public HFileBlock(BlockType blockType, int onDiskSizeWithoutHeader, int uncompressedSizeWithoutHeader, long prevBlockOffset, ByteBuffer b, boolean fillHeader, long offset, int nextBlockOnDiskSize, int onDiskDataSizeWithHeader, HFileContext fileContext)
Creates a new HFile block from the given fields. This constructor is used only while writing blocks and caching, when the block bytes are already sitting in a byte buffer and we want to stuff the block into the cache. See HFileBlock.Writer.getBlockForCaching(CacheConfig).
 TODO: The caller presumes no checksumming required of this block instance since going into cache; checksum already verified on underlying block data pulled in from filesystem. Is that correct? What if cache is SSD?
Parameters:
blockType - the type of this block; see BlockType
onDiskSizeWithoutHeader - see onDiskSizeWithoutHeader
uncompressedSizeWithoutHeader - see uncompressedSizeWithoutHeader
prevBlockOffset - see prevBlockOffset
b - block header (HConstants.HFILEBLOCK_HEADER_SIZE bytes)
fillHeader - when true, write the first 4 header fields into the passed buffer
offset - the file offset the block was read from
onDiskDataSizeWithHeader - see onDiskDataSizeWithHeader
fileContext - HFile meta data

public int getNextBlockOnDiskSize()
public BlockType getBlockType()
Specified by: getBlockType in interface Cacheable

public short getDataBlockEncodingId()
public int getOnDiskSizeWithHeader()
public int getUncompressedSizeWithoutHeader()
public ByteBuffer getBufferWithoutHeader()
public ByteBuffer getBufferReadOnly()
Returns a read-only duplicate of the buffer this block stores internally, ready to be read. This method has to be public because it is used in CompoundBloomFilter to avoid object creation on every Bloom filter lookup, but it has to be used with caution. The buffer holds the header, the block content, and any follow-on checksums if present.

public boolean isUnpacked()
public void sanityCheckUncompressedSize()
                                 throws IOException
Throws: IOException

public DataInputStream getByteStream()
public long heapSize()
public int getSerializedLength()
Specified by: getSerializedLength in interface Cacheable

public void serialize(ByteBuffer destination, boolean includeNextBlockOnDiskSize)
Specified by: serialize in interface Cacheable

public ByteBuffer getMetaData()
public CacheableDeserializer<Cacheable> getDeserializer()
Specified by: getDeserializer in interface Cacheable

public DataBlockEncoding getDataBlockEncoding()
public int headerSize()
public static int headerSize(boolean usesHBaseChecksum)
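A tiny sketch of the static overload, assuming it behaves as summarized above (mapping whether HBase checksums are in use to the corresponding header size):

```java
import org.apache.hadoop.hbase.io.hfile.HFileBlock;

public class HeaderSizeSketch {
  public static void main(String[] args) {
    // Header size with HBase-level checksums vs. the older, checksum-free header layout.
    System.out.println("with hbase checksums:    " + HFileBlock.headerSize(true));
    System.out.println("without hbase checksums: " + HFileBlock.headerSize(false));
  }
}
```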
Copyright © 2007–2019 The Apache Software Foundation. All rights reserved.