@InterfaceAudience.Public public class ColumnPaginationFilter extends org.apache.hadoop.hbase.filter.FilterBase
| Constructor and Description |
|---|
| `ColumnPaginationFilter(int limit, byte[] columnOffset)` Initializes filter with a string/bookmark based offset and limit. |
| `ColumnPaginationFilter(int limit, int offset)` Initializes filter with an integer offset and limit. |
| Modifier and Type | Method and Description |
|---|---|
| `static Filter` | `createFilterFromArguments(ArrayList<byte[]> filterArguments)` |
| `boolean` | `equals(Object obj)` |
| `Filter.ReturnCode` | `filterCell(Cell c)` A way to filter based on the column family, column qualifier and/or the column value. |
| `Filter.ReturnCode` | `filterKeyValue(Cell c)` Deprecated. |
| `boolean` | `filterRowKey(Cell cell)` Filters a row based on the row key. |
| `byte[]` | `getColumnOffset()` |
| `int` | `getLimit()` |
| `Cell` | `getNextCellHint(Cell cell)` Filters that are not sure which key must be seeked to next can inherit this implementation, which by default returns a null Cell. |
| `int` | `getOffset()` |
| `int` | `hashCode()` |
| `static ColumnPaginationFilter` | `parseFrom(byte[] pbBytes)` |
| `void` | `reset()` Filters that are purely stateless and do nothing in their reset() methods can inherit this null/empty implementation. |
| `byte[]` | `toByteArray()` Return a length-0 byte array for Filters that don't require special serialization. |
| `String` | `toString()` Return the filter's info for debugging and logging purposes. |
Methods inherited from class org.apache.hadoop.hbase.filter.FilterBase:
`filterAllRemaining, filterRow, filterRowCells, filterRowKey, hasFilterRow, isFamilyEssential, transformCell`

Methods inherited from class org.apache.hadoop.hbase.filter.Filter:
`isReversed, setReversed`

Constructor Detail

`public ColumnPaginationFilter(int limit, int offset)`

Initializes filter with an integer offset and limit.

- `limit` - Max number of columns to return.
- `offset` - The integer offset where to start pagination.

`public ColumnPaginationFilter(int limit, byte[] columnOffset)`

Initializes filter with a string/bookmark based offset and limit.

- `limit` - Max number of columns to return.
- `columnOffset` - The string/bookmark offset on where to start pagination.
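With the integer-offset constructor, a client pages through a wide row by re-issuing the read with the offset advanced by `limit` each time. A minimal client-side sketch of that loop, using a sorted list of strings to stand in for a row's columns (no HBase API involved):

```java
import java.util.List;

public class OffsetPaging {
    // Simulate one read of a row under ColumnPaginationFilter(limit, offset):
    // return at most `limit` columns, starting `offset` columns in.
    public static List<String> page(List<String> columns, int limit, int offset) {
        int from = Math.min(offset, columns.size());
        int to = Math.min(offset + limit, columns.size());
        return columns.subList(from, to);
    }

    public static void main(String[] args) {
        List<String> row = List.of("c1", "c2", "c3", "c4", "c5");
        int limit = 2;
        // Each iteration stands in for one Get/Scan with a fresh filter.
        for (int offset = 0; offset < row.size(); offset += limit) {
            System.out.println(page(row, limit, offset));
        }
        // [c1, c2]
        // [c3, c4]
        // [c5]
    }
}
```

A drawback of this style is that each page still forces the server to count past all the skipped columns, which is what the bookmark-based constructor avoids.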
Method Detail

`public int getLimit()`

`public int getOffset()`

`public byte[] getColumnOffset()`
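The `byte[] columnOffset` constructor starts the page at the bookmark rather than at a counted position, so the next page can resume from the last qualifier already seen. The sketch below assumes the bookmark is inclusive (pagination starts at the first qualifier greater than or equal to it) and uses sorted string qualifiers for illustration; real HBase qualifiers are byte arrays compared lexicographically:

```java
import java.util.ArrayList;
import java.util.List;

public class BookmarkPaging {
    // Return up to `limit` qualifiers, starting at the first one >= columnOffset.
    public static List<String> page(List<String> sortedQualifiers, int limit,
                                    String columnOffset) {
        List<String> out = new ArrayList<>();
        for (String q : sortedQualifiers) {
            if (q.compareTo(columnOffset) >= 0) {
                out.add(q);
                if (out.size() == limit) break;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> row = List.of("a", "b", "c", "d", "e");
        List<String> page1 = page(row, 2, "");   // empty bookmark: start at the front
        // Resume strictly after the last qualifier of page1 by appending a
        // zero byte, the smallest possible suffix, to form the next bookmark.
        String next = page1.get(page1.size() - 1) + '\0';
        List<String> page2 = page(row, 2, next);
        System.out.println(page1 + " " + page2); // [a, b] [c, d]
    }
}
```

Because the server seeks directly to the bookmark instead of counting skipped columns, this style scales better for very wide rows.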
`public boolean filterRowKey(Cell cell) throws IOException`

Filters a row based on the row key. If this returns true, the entire row is excluded; otherwise each Cell in the row is passed to `Filter.filterCell(Cell)` below. If `Filter.filterAllRemaining()` returns true, then `Filter.filterRowKey(Cell)` should also return true. Concrete implementers can signal a failure condition in their code by throwing an `IOException`.

- Overrides: `filterRowKey` in class `org.apache.hadoop.hbase.filter.FilterBase`
- Parameters: `cell` - The first cell coming in the new row
- Throws: `IOException` - in case an I/O or a filter specific failure needs to be signaled.

`@Deprecated public Filter.ReturnCode filterKeyValue(Cell c)`
Deprecated.

A way to filter based on the column family, column qualifier and/or the column value. If your filter returns `ReturnCode.NEXT_ROW`, it should keep returning `ReturnCode.NEXT_ROW` until `Filter.reset()` is called, just in case the caller calls for the next row. Concrete implementers can signal a failure condition in their code by throwing an `IOException`.

- Overrides: `filterKeyValue` in class `Filter`
- Parameters: `c` - the Cell in question
- Returns: `Filter.ReturnCode`

`public Filter.ReturnCode filterCell(Cell c)`
A way to filter based on the column family, column qualifier and/or the column value. If your filter returns `ReturnCode.NEXT_ROW`, it should keep returning `ReturnCode.NEXT_ROW` until `Filter.reset()` is called, just in case the caller calls for the next row. Concrete implementers can signal a failure condition in their code by throwing an `IOException`.

- Overrides: `filterCell` in class `Filter`
- Parameters: `c` - the Cell in question
- Returns: `Filter.ReturnCode`

`public Cell getNextCellHint(Cell cell)`
Filters that are not sure which key must be seeked to next can inherit this implementation, which by default returns a null Cell. Concrete implementers can signal a failure condition in their code by throwing an `IOException`.

- Overrides: `getNextCellHint` in class `org.apache.hadoop.hbase.filter.FilterBase`

`public void reset()`

Filters that are purely stateless and do nothing in their reset() methods can inherit this null/empty implementation. Concrete implementers can signal a failure condition in their code by throwing an `IOException`.

- Overrides: `reset` in class `org.apache.hadoop.hbase.filter.FilterBase`

`public static Filter createFilterFromArguments(ArrayList<byte[]> filterArguments)`
`public byte[] toByteArray()`

Return a length-0 byte array for Filters that don't require special serialization.

- Overrides: `toByteArray` in class `org.apache.hadoop.hbase.filter.FilterBase`

`public static ColumnPaginationFilter parseFrom(byte[] pbBytes) throws org.apache.hadoop.hbase.exceptions.DeserializationException`

- Parameters: `pbBytes` - A pb serialized ColumnPaginationFilter instance
- Returns: A `ColumnPaginationFilter` made from `bytes`
- Throws: `org.apache.hadoop.hbase.exceptions.DeserializationException`
- See also: `toByteArray()`

`public String toString()`

Return the filter's info for debugging and logging purposes.

- Overrides: `toString` in class `org.apache.hadoop.hbase.filter.FilterBase`

Copyright © 2007–2020 The Apache Software Foundation. All rights reserved.