@InterfaceAudience.Private
public class DefaultCompactor
extends Compactor

Compact a passed set of files. Create an instance and then call compact(CompactionRequest, CompactionThroughputController, User).
Nested classes/interfaces inherited from class Compactor:
Compactor.CellSink, Compactor.FileDetails

Fields inherited from class Compactor:
compactionCompression, conf, keepSeqIdPeriod, progress, store
Constructor and Description |
---|
DefaultCompactor(org.apache.hadoop.conf.Configuration conf, Store store) |
Modifier and Type | Method and Description |
---|---|
List<org.apache.hadoop.fs.Path> | compact(CompactionRequest request, CompactionThroughputController throughputController, User user): Do a minor/major compaction on an explicit set of storefiles from a Store. |
List<org.apache.hadoop.fs.Path> | compactForTesting(Collection<StoreFile> filesToCompact, boolean isMajor): Compact a list of files for testing. |
Methods inherited from class Compactor:
createFileScanners, createScanner, createScanner, getFileDetails, getProgress, getSmallestReadPoint, performCompaction, postCreateCoprocScanner, preCreateCoprocScanner, preCreateCoprocScanner
public DefaultCompactor(org.apache.hadoop.conf.Configuration conf, Store store)
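The snippet below is a minimal sketch of constructing the compactor from a Configuration and a Store. DefaultCompactor is marked @InterfaceAudience.Private, so this is illustrative only; the surrounding class and method names are hypothetical, and the Store is assumed to be supplied by the region server (or a test harness) rather than built here.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor;

// Hypothetical helper class used only for illustration.
public final class DefaultCompactorExample {
  // Builds a DefaultCompactor for the given store, mirroring the
  // constructor signature documented above.
  static DefaultCompactor newCompactor(Configuration conf, Store store) {
    return new DefaultCompactor(conf, store);
  }
}
```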
public List<org.apache.hadoop.fs.Path> compact(CompactionRequest request, CompactionThroughputController throughputController, User user) throws IOException

Do a minor/major compaction on an explicit set of storefiles from a Store.

Throws:
IOException
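As a usage sketch, the following shows how compact(...) might be invoked on an existing compactor. The import location of CompactionThroughputController is assumed to be org.apache.hadoop.hbase.regionserver.compactions, matching the 1.x package layout of this page; the helper class and method names are hypothetical.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionThroughputController;
import org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor;
import org.apache.hadoop.hbase.security.User;

// Hypothetical helper class used only for illustration.
public final class CompactInvocationExample {
  // Runs the compaction described by the request and returns the paths
  // of the newly written store files.
  static List<Path> runCompaction(DefaultCompactor compactor,
      CompactionRequest request,
      CompactionThroughputController throughputController,
      User user) throws IOException {
    return compactor.compact(request, throughputController, user);
  }
}
```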
public List<org.apache.hadoop.fs.Path> compactForTesting(Collection<StoreFile> filesToCompact, boolean isMajor) throws IOException

Compact a list of files for testing. Creates a CompactionRequest to pass to compact(CompactionRequest, CompactionThroughputController, User).

Parameters:
filesToCompact - the files to compact. These are used as the compactionSelection for the generated CompactionRequest.
isMajor - true to major compact (prune all deletes, max versions, etc.)

Throws:
IOException
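Below is a sketch of driving compactForTesting(...) from a test. The StoreFile collection is assumed to come from the test's own Store setup (for example via Store.getStorefiles()); the helper class and method names are hypothetical.

```java
import java.io.IOException;
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor;

// Hypothetical helper class used only for illustration.
public final class CompactForTestingExample {
  // Major-compacts the given files; isMajor = true prunes deletes,
  // excess versions, etc., as described above.
  static List<Path> majorCompactAll(DefaultCompactor compactor,
      Collection<StoreFile> filesToCompact) throws IOException {
    return compactor.compactForTesting(filesToCompact, true);
  }
}
```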
Copyright © 2007-2016 The Apache Software Foundation. All Rights Reserved.