Package | Description |
---|---|
org.apache.hadoop.hbase | |
org.apache.hadoop.hbase.client | Provides HBase Client. |
org.apache.hadoop.hbase.io.compress | |
org.apache.hadoop.hbase.io.encoding | |
org.apache.hadoop.hbase.io.hfile | Provides implementations of HFile and HFile BlockCache. |
org.apache.hadoop.hbase.mapreduce | Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods. |
org.apache.hadoop.hbase.mob | |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.regionserver.compactions | |
org.apache.hadoop.hbase.regionserver.storefiletracker | |
org.apache.hadoop.hbase.regionserver.wal | |
org.apache.hadoop.hbase.thrift2 | Provides an HBase Thrift service. |
org.apache.hadoop.hbase.util | |
Modifier and Type | Method and Description |
---|---|
Compression.Algorithm | HColumnDescriptor.getCompactionCompression() Deprecated. As of release 2.0.0, this will be removed in HBase 3.0.0 (HBASE-13655). Use HColumnDescriptor.getCompactionCompressionType(). |
Compression.Algorithm | HColumnDescriptor.getCompactionCompressionType() Deprecated. |
Compression.Algorithm | HColumnDescriptor.getCompression() Deprecated. As of release 2.0.0, this will be removed in HBase 3.0.0 (HBASE-13655). Use HColumnDescriptor.getCompressionType(). |
Compression.Algorithm | HColumnDescriptor.getCompressionType() Deprecated. |
Compression.Algorithm | HColumnDescriptor.getMajorCompactionCompressionType() Deprecated. |
Compression.Algorithm | HColumnDescriptor.getMinorCompactionCompressionType() Deprecated. |
Modifier and Type | Method and Description |
---|---|
HColumnDescriptor | HColumnDescriptor.setCompactionCompressionType(Compression.Algorithm value) Deprecated. Compression types supported in hbase. |
HColumnDescriptor | HColumnDescriptor.setCompressionType(Compression.Algorithm value) Deprecated. Compression types supported in hbase. |
HColumnDescriptor | HColumnDescriptor.setMajorCompactionCompressionType(Compression.Algorithm value) Deprecated. |
HColumnDescriptor | HColumnDescriptor.setMinorCompactionCompressionType(Compression.Algorithm value) Deprecated. |
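The HColumnDescriptor accessors above are deprecated; the replacements live on ColumnFamilyDescriptor and ColumnFamilyDescriptorBuilder in org.apache.hadoop.hbase.client (next tables). A minimal migration sketch, assuming a column family named "cf" purely for illustration:

```java
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class CompressionMigrationExample {
  @SuppressWarnings("deprecation")
  public static void main(String[] args) {
    // Deprecated since 2.0.0: mutate an HColumnDescriptor directly.
    HColumnDescriptor legacy = new HColumnDescriptor("cf");
    legacy.setCompressionType(Compression.Algorithm.SNAPPY);
    Compression.Algorithm legacyAlgo = legacy.getCompressionType();

    // Replacement: build an immutable ColumnFamilyDescriptor.
    ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("cf"))
        .setCompressionType(Compression.Algorithm.SNAPPY)
        .build();

    System.out.println(legacyAlgo + " / " + cf.getCompressionType());
  }
}
```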
Modifier and Type | Field and Description |
---|---|
static Compression.Algorithm | ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESSION Default compression type. |

Modifier and Type | Method and Description |
---|---|
Compression.Algorithm | ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.getCompactionCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptor.getCompactionCompressionType() Returns Compression type setting. |
Compression.Algorithm | ColumnFamilyDescriptorBuilder.getCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.getCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptor.getCompressionType() Returns Compression type setting. |
Compression.Algorithm | ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.getMajorCompactionCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptor.getMajorCompactionCompressionType() Returns Compression type setting for major compactions. |
Compression.Algorithm | ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor.getMinorCompactionCompressionType() |
Compression.Algorithm | ColumnFamilyDescriptor.getMinorCompactionCompressionType() Returns Compression type setting for minor compactions. |
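A short sketch of how the builder-side setters and the descriptor-side getters above fit together. The family name "d" and the chosen algorithms are illustrative, and the builder's setCompactionCompressionType is assumed to mirror the deprecated HColumnDescriptor setter of the same name:

```java
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadCompressionSettings {
  public static void main(String[] args) {
    ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("d"))
        .setCompressionType(Compression.Algorithm.SNAPPY)         // used when flushing new store files
        .setCompactionCompressionType(Compression.Algorithm.GZ)   // used when rewriting files during compaction
        .build();

    System.out.println("flush:      " + cf.getCompressionType());
    System.out.println("compaction: " + cf.getCompactionCompressionType());
    System.out.println("major:      " + cf.getMajorCompactionCompressionType());
    System.out.println("minor:      " + cf.getMinorCompactionCompressionType());
    System.out.println("default:    " + ColumnFamilyDescriptorBuilder.DEFAULT_COMPRESSION);
  }
}
```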
Modifier and Type | Method and Description |
---|---|
static Compression.Algorithm | Compression.getCompressionAlgorithmByName(String compressName) |
static Compression.Algorithm | Compression.Algorithm.valueOf(String name) Returns the enum constant of this type with the specified name. |
static Compression.Algorithm[] | Compression.Algorithm.values() Returns an array containing the constants of this enum type, in the order they are declared. |
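Compression.Algorithm is an ordinary enum, so valueOf and values behave as for any Java enum; getCompressionAlgorithmByName is assumed to take the lower-case configuration name returned by Algorithm.getName() (e.g. "snappy"), which is worth verifying against your release:

```java
import org.apache.hadoop.hbase.io.compress.Compression;

public class AlgorithmLookup {
  public static void main(String[] args) {
    // Enum-style lookup by constant name (throws IllegalArgumentException if unknown).
    Compression.Algorithm byConstant = Compression.Algorithm.valueOf("SNAPPY");

    // List every algorithm compiled into this HBase version.
    for (Compression.Algorithm algo : Compression.Algorithm.values()) {
      System.out.println(algo.name() + " -> " + algo.getName());
    }

    // Lookup by the configuration-style name; assumed to expect the
    // lower-case form returned by Algorithm.getName(), e.g. "snappy".
    Compression.Algorithm byConfName = Compression.getCompressionAlgorithmByName("snappy");
    System.out.println(byConstant == byConfName);
  }
}
```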
Modifier and Type | Method and Description |
---|---|
private static org.apache.hadoop.io.compress.CompressionCodec | Compression.buildCodec(org.apache.hadoop.conf.Configuration conf, Compression.Algorithm algo) Load a codec implementation for an algorithm using the supplied configuration. |
Modifier and Type | Method and Description |
---|---|
static int | EncodedDataBlock.getCompressedSize(Compression.Algorithm algo, org.apache.hadoop.io.compress.Compressor compressor, byte[] inputBuffer, int offset, int length) Find the size of compressed data assuming that buffer will be compressed using given algorithm. |
int | EncodedDataBlock.getEncodedCompressedSize(Compression.Algorithm comprAlgo, org.apache.hadoop.io.compress.Compressor compressor) Estimate size after second stage of compression (e.g. LZO). |
Modifier and Type | Field and Description |
---|---|
private Compression.Algorithm | HFileContext.compressAlgo Compression algorithm used. |
private Compression.Algorithm | HFileContextBuilder.compression Compression algorithm used. |
private Compression.Algorithm | FixedFileTrailer.compressionCodec The compression codec used for all blocks. |
static Compression.Algorithm | HFile.DEFAULT_COMPRESSION_ALGORITHM Default compression: none. |

Modifier and Type | Method and Description |
---|---|
static Compression.Algorithm | HFileWriterImpl.compressionByName(String algoName) |
Compression.Algorithm | HFileContext.getCompression() |
Compression.Algorithm | HFileReaderImpl.getCompressionAlgorithm() |
Compression.Algorithm | FixedFileTrailer.getCompressionCodec() |

Modifier and Type | Method and Description |
---|---|
void | FixedFileTrailer.setCompressionCodec(Compression.Algorithm compressionCodec) |
HFileContextBuilder | HFileContextBuilder.withCompression(Compression.Algorithm compression) |

Constructor and Description |
---|
HFileContext(boolean useHBaseChecksum, boolean includesMvcc, boolean includesTags, Compression.Algorithm compressAlgo, boolean compressTags, ChecksumType checksumType, int bytesPerChecksum, int blockSize, DataBlockEncoding encoding, Encryption.Context cryptoContext, long fileCreateTime, String hfileName, byte[] columnFamily, byte[] tableName, CellComparator cellComparator) |
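Internal code normally builds an HFileContext through HFileContextBuilder rather than the long constructor above; a minimal sketch showing only the compression-related settings (these are HBase-internal classes, so the API may shift between versions):

```java
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

public class HFileContextExample {
  public static void main(String[] args) {
    HFileContext context = new HFileContextBuilder()
        .withCompression(Compression.Algorithm.GZ) // block compression for this HFile
        .withBlockSize(64 * 1024)                  // 64 KB data blocks
        .build();

    System.out.println(context.getCompression());  // GZ
  }
}
```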
Modifier and Type | Method and Description |
---|---|
(package private) static Map<byte[],Compression.Algorithm> | HFileOutputFormat2.createFamilyCompressionMap(org.apache.hadoop.conf.Configuration conf) Runs inside the task to deserialize column family to compression algorithm map from the configuration. |
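createFamilyCompressionMap is the task-side half of bulk-load setup: the driver serializes each column family's compression algorithm into the job configuration, typically via HFileOutputFormat2.configureIncrementalLoad, and each task deserializes the family-to-algorithm map back out. A hedged driver-side sketch; the table name "my_table" is illustrative and error handling is elided:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;

public class BulkLoadSetup {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "bulk-load-example");

    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("my_table"));
         RegionLocator locator = connection.getRegionLocator(table.getName())) {
      // Copies each column family's compression (and other) settings into the
      // job configuration; the task side rebuilds the family -> algorithm map
      // from it (createFamilyCompressionMap in the table above).
      HFileOutputFormat2.configureIncrementalLoad(job, table.getDescriptor(), locator);
    }
  }
}
```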
Modifier and Type | Method and Description |
---|---|
static StoreFileWriter | MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, MobFileName mobFileName, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, CacheConfig cacheConfig, Encryption.Context cryptoContext, boolean isCompaction) Creates a writer for the mob file in temp directory. |
static StoreFileWriter | MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, org.apache.hadoop.fs.Path path, long maxKeyCount, Compression.Algorithm compression, CacheConfig cacheConfig, Encryption.Context cryptoContext, ChecksumType checksumType, int bytesPerChecksum, int blocksize, BloomType bloomType, boolean isCompaction) Creates a writer for the mob file in temp directory. |
static StoreFileWriter | MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, org.apache.hadoop.fs.Path path, long maxKeyCount, Compression.Algorithm compression, CacheConfig cacheConfig, Encryption.Context cryptoContext, ChecksumType checksumType, int bytesPerChecksum, int blocksize, BloomType bloomType, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker) Creates a writer for the mob file in temp directory. |
static StoreFileWriter | MobUtils.createWriter(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, ColumnFamilyDescriptor family, String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, String startKey, CacheConfig cacheConfig, Encryption.Context cryptoContext, boolean isCompaction, String regionName) Creates a writer for the mob file in temp directory. |
Modifier and Type | Field and Description |
---|---|
private Compression.Algorithm | CreateStoreFileWriterParams.compression |

Modifier and Type | Method and Description |
---|---|
Compression.Algorithm | CreateStoreFileWriterParams.compression() |

Modifier and Type | Method and Description |
---|---|
CreateStoreFileWriterParams | CreateStoreFileWriterParams.compression(Compression.Algorithm compression) Set the compression algorithm to use. |
StoreFileWriter | HMobStore.createWriter(Date date, long maxKeyCount, Compression.Algorithm compression, byte[] startKey, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker) Creates the writer for the mob file in the mob family directory. |
StoreFileWriter | HMobStore.createWriterInTmp(Date date, long maxKeyCount, Compression.Algorithm compression, byte[] startKey, boolean isCompaction) Creates the writer for the mob file in temp directory. |
StoreFileWriter | HMobStore.createWriterInTmp(MobFileName mobFileName, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker) Creates the writer for the mob file in temp directory. |
StoreFileWriter | HMobStore.createWriterInTmp(String date, org.apache.hadoop.fs.Path basePath, long maxKeyCount, Compression.Algorithm compression, byte[] startKey, boolean isCompaction, Consumer<org.apache.hadoop.fs.Path> writerCreationTracker) Creates the writer for the mob file in temp directory. |
Modifier and Type | Field and Description |
---|---|
protected Compression.Algorithm | Compactor.majorCompactionCompression |
protected Compression.Algorithm | Compactor.minorCompactionCompression |

Modifier and Type | Method and Description |
---|---|
private HFileContext | StoreFileTrackerBase.createFileContext(Compression.Algorithm compression, boolean includeMVCCReadpoint, boolean includesTag, Encryption.Context encryptionContext) |
Modifier and Type | Field and Description |
---|---|
private Compression.Algorithm | CompressionContext.ValueCompressor.algorithm |
protected Compression.Algorithm | ProtobufLogReader.valueCompressionType |

Modifier and Type | Method and Description |
---|---|
Compression.Algorithm | CompressionContext.ValueCompressor.getAlgorithm() |
protected abstract Compression.Algorithm | ReaderBase.getValueCompressionAlgorithm() Returns the value compression algorithm for this log. |
protected Compression.Algorithm | ProtobufLogReader.getValueCompressionAlgorithm() |
static Compression.Algorithm | CompressionContext.getValueCompressionAlgorithm(org.apache.hadoop.conf.Configuration conf) |
Constructor and Description |
---|
CompressionContext(Class<? extends Dictionary> dictType, boolean recoveredEdits, boolean hasTagCompression, boolean hasValueCompression, Compression.Algorithm valueCompressionType) |
ValueCompressor(Compression.Algorithm algorithm) |
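CompressionContext.getValueCompressionAlgorithm(conf) resolves the WAL value-compression algorithm from the configuration. A sketch of enabling it; the property names below are assumptions for recent 2.x releases and should be checked against the constants defined in CompressionContext:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.regionserver.wal.CompressionContext;

public class WalValueCompressionConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed property names (verify against CompressionContext in your release):
    conf.setBoolean("hbase.regionserver.wal.value.enablecompression", true);
    conf.set("hbase.regionserver.wal.value.compression.type", "snappy");

    Compression.Algorithm algo = CompressionContext.getValueCompressionAlgorithm(conf);
    System.out.println("WAL value compression: " + algo);
  }
}
```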
Modifier and Type | Method and Description |
---|---|
static Compression.Algorithm | ThriftUtilities.compressionAlgorithmFromThrift(org.apache.hadoop.hbase.thrift2.generated.TCompressionAlgorithm in) |

Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hbase.thrift2.generated.TCompressionAlgorithm | ThriftUtilities.compressionAlgorithmFromHBase(Compression.Algorithm in) |
Modifier and Type | Method and Description |
---|---|
static void | CompressionTest.testCompression(Compression.Algorithm algo) |
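CompressionTest.testCompression is the usual way to confirm that a codec (and any native library it needs) is actually usable on a node; it throws an IOException when the algorithm cannot be instantiated. A small sketch:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.CompressionTest;

public class VerifyCodecs {
  public static void main(String[] args) {
    for (Compression.Algorithm algo : new Compression.Algorithm[] {
        Compression.Algorithm.GZ, Compression.Algorithm.SNAPPY }) {
      try {
        CompressionTest.testCompression(algo);
        System.out.println(algo + ": OK");
      } catch (IOException e) {
        System.out.println(algo + ": not usable on this node (" + e.getMessage() + ")");
      }
    }
  }
}
```

The same check is commonly run from the command line with `hbase org.apache.hadoop.hbase.util.CompressionTest <path> snappy`, which writes and re-reads a small test file with the named codec.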