Uses of Class
org.apache.hadoop.hbase.client.Scan

Packages that use org.apache.hadoop.hbase.client.Scan
Package
Description
org.apache.hadoop.hbase
 
org.apache.hadoop.hbase.client
Provides HBase Client
org.apache.hadoop.hbase.client.trace
 
org.apache.hadoop.hbase.coprocessor
 
org.apache.hadoop.hbase.io
 
org.apache.hadoop.hbase.mapred
Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
org.apache.hadoop.hbase.mapreduce
Provides HBase MapReduce Input/OutputFormats, a table indexing MapReduce job, and utility methods.
org.apache.hadoop.hbase.mob
 
org.apache.hadoop.hbase.quotas
 
org.apache.hadoop.hbase.regionserver
 
org.apache.hadoop.hbase.regionserver.querymatcher
 
org.apache.hadoop.hbase.shaded.protobuf
 
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase

    Methods in org.apache.hadoop.hbase that return org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.client.Scan
    MetaTableAccessor.getScanForTableName(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.TableName tableName)
    This method creates a Scan object that will only scan catalog rows that belong to the specified table.
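     For illustration only (MetaTableAccessor is internal API): the returned Scan can be run against hbase:meta to list the catalog rows of a table. In this sketch, conf and conn are an assumed Configuration and open Connection, and "ns:t1" is a placeholder table name.

        Scan metaScan = MetaTableAccessor.getScanForTableName(conf, TableName.valueOf("ns:t1"));
        try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
             ResultScanner rs = meta.getScanner(metaScan)) {
          for (Result r : rs) {
            // each Result is one catalog (region) row belonging to ns:t1
          }
        }
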
    Methods in org.apache.hadoop.hbase with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static int
    HBaseTestingUtility.countRows(org.apache.hadoop.hbase.client.Table table, org.apache.hadoop.hbase.client.Scan scan)
    Deprecated.
     
    int
    HBaseTestingUtility.countRows(org.apache.hadoop.hbase.regionserver.Region region, org.apache.hadoop.hbase.client.Scan scan)
    Deprecated.
     
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.client

    Subclasses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.client
    Modifier and Type
    Class
    Description
    final class 
    org.apache.hadoop.hbase.client.ImmutableScan
    Immutable version of Scan
    Methods in org.apache.hadoop.hbase.client that return org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.addColumn(byte[] family, byte[] qualifier)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.addColumn(byte[] family, byte[] qualifier)
    Get the column from the specified family with the specified qualifier.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.addFamily(byte[] family)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.addFamily(byte[] family)
    Get all columns from the specified family.
    static org.apache.hadoop.hbase.client.Scan
    Scan.createScanFromCursor(org.apache.hadoop.hbase.client.Cursor cursor)
    Create a new Scan with a cursor.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.readAllVersions()
     
    org.apache.hadoop.hbase.client.Scan
    Scan.readAllVersions()
    Get all available versions.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.readVersions(int versions)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.readVersions(int versions)
    Get up to the specified number of versions of each column.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setACL(String user, org.apache.hadoop.hbase.security.access.Permission perms)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setACL(Map<String,org.apache.hadoop.hbase.security.access.Permission> perms)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setACL(String user, org.apache.hadoop.hbase.security.access.Permission perms)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setACL(Map<String,org.apache.hadoop.hbase.security.access.Permission> perms)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setAllowPartialResults(boolean allowPartialResults)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setAllowPartialResults(boolean allowPartialResults)
     Set whether the caller wants to see partial results when the server returns fewer cells than expected.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setAsyncPrefetch(boolean asyncPrefetch)
    Deprecated.
    org.apache.hadoop.hbase.client.Scan
    Scan.setAsyncPrefetch(boolean asyncPrefetch)
    Deprecated.
    Since 3.0.0, will be removed in 4.0.0.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setAttribute(String name, byte[] value)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setAttribute(String name, byte[] value)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setAuthorizations(org.apache.hadoop.hbase.security.visibility.Authorizations authorizations)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setAuthorizations(org.apache.hadoop.hbase.security.visibility.Authorizations authorizations)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setBatch(int batch)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setBatch(int batch)
    Set the maximum number of cells to return for each call to next().
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setCacheBlocks(boolean cacheBlocks)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setCacheBlocks(boolean cacheBlocks)
    Set whether blocks should be cached for this Scan.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setCaching(int caching)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setCaching(int caching)
    Set the number of rows for caching that will be passed to scanners.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setColumnFamilyTimeRange(byte[] cf, long minStamp, long maxStamp)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setConsistency(org.apache.hadoop.hbase.client.Consistency consistency)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setConsistency(org.apache.hadoop.hbase.client.Consistency consistency)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setFamilyMap(Map<byte[],NavigableSet<byte[]>> familyMap)
     Set the familyMap.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setFilter(org.apache.hadoop.hbase.filter.Filter filter)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setFilter(org.apache.hadoop.hbase.filter.Filter filter)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setId(String id)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setId(String id)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setIsolationLevel(org.apache.hadoop.hbase.client.IsolationLevel level)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setIsolationLevel(org.apache.hadoop.hbase.client.IsolationLevel level)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setLimit(int limit)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setLimit(int limit)
    Set the limit of rows for this scan.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setLoadColumnFamiliesOnDemand(boolean value)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setLoadColumnFamiliesOnDemand(boolean value)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setMaxResultSize(long maxResultSize)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setMaxResultSize(long maxResultSize)
    Set the maximum result size.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setMaxResultsPerColumnFamily(int limit)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setMaxResultsPerColumnFamily(int limit)
    Set the maximum number of values to return per row per Column Family
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setNeedCursorResult(boolean needCursorResult)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setNeedCursorResult(boolean needCursorResult)
     When the server is slow, or we scan a table with much deleted data, or we use a sparse filter, the server will respond with a heartbeat to prevent a timeout.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setOneRowLimit()
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setOneRowLimit()
    Call this when you only want to get one row.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setPriority(int priority)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setPriority(int priority)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setRaw(boolean raw)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setRaw(boolean raw)
    Enable/disable "raw" mode for this scan.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setReadType(org.apache.hadoop.hbase.client.Scan.ReadType readType)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setReadType(org.apache.hadoop.hbase.client.Scan.ReadType readType)
    Set the read type for this scan.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setReplicaId(int id)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setReplicaId(int Id)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setReversed(boolean reversed)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setReversed(boolean reversed)
    Set whether this scan is a reversed one
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setRowOffsetPerColumnFamily(int offset)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setRowOffsetPerColumnFamily(int offset)
    Set offset for the row per Column Family.
    org.apache.hadoop.hbase.client.Scan
    Scan.setRowPrefixFilter(byte[] rowPrefix)
    Deprecated.
     Since 2.5.0, will be removed in 4.0.0.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setScanMetricsEnabled(boolean enabled)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setScanMetricsEnabled(boolean enabled)
    Enable collection of ScanMetrics.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setStartStopRowForPrefixScan(byte[] rowPrefix)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setStartStopRowForPrefixScan(byte[] rowPrefix)
    Set a filter (using stopRow and startRow) so the result set only contains rows where the rowKey starts with the specified prefix.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setTimeRange(long minStamp, long maxStamp)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setTimeRange(long minStamp, long maxStamp)
    Get versions of columns only within the specified timestamp range, [minStamp, maxStamp).
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.setTimestamp(long timestamp)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.setTimestamp(long timestamp)
    Get versions of columns with the specified timestamp.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.withStartRow(byte[] startRow)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.withStartRow(byte[] startRow, boolean inclusive)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.withStartRow(byte[] startRow)
    Set the start row of the scan.
    org.apache.hadoop.hbase.client.Scan
    Scan.withStartRow(byte[] startRow, boolean inclusive)
    Set the start row of the scan.
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.withStopRow(byte[] stopRow)
     
    org.apache.hadoop.hbase.client.Scan
    ImmutableScan.withStopRow(byte[] stopRow, boolean inclusive)
     
    org.apache.hadoop.hbase.client.Scan
    Scan.withStopRow(byte[] stopRow)
    Set the stop row of the scan.
    org.apache.hadoop.hbase.client.Scan
    Scan.withStopRow(byte[] stopRow, boolean inclusive)
    Set the stop row of the scan.
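
     The setters above all return the Scan itself, so calls can be chained. A minimal sketch of configuring a prefix scan; the family, qualifier, and prefix are placeholders, and Bytes is org.apache.hadoop.hbase.util.Bytes:

        Scan scan = new Scan()
            .setStartStopRowForPrefixScan(Bytes.toBytes("user|"))  // derive start/stop rows from a prefix
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"))    // read a single column
            .readVersions(2)                                       // up to two versions per column
            .setCaching(500)                                       // rows fetched per RPC
            .setLimit(1000)                                        // at most 1000 rows in total
            .setCacheBlocks(false);                                // skip the block cache for this scan
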
    Methods in org.apache.hadoop.hbase.client that return types with arguments of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    Optional<org.apache.hadoop.hbase.client.Scan>
    OnlineLogRecord.getScan()
    If "hbase.slowlog.scan.payload.enabled" is enabled then this value may be present and should represent the Scan that produced the given OnlineLogRecord
    Methods in org.apache.hadoop.hbase.client with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.client.ScanResultCache
    ConnectionUtils.createScanResultCache(org.apache.hadoop.hbase.client.Scan scan)
     
    static long
    PackagePrivateFieldAccessor.getMvccReadPoint(org.apache.hadoop.hbase.client.Scan scan)
     
    org.apache.hadoop.hbase.client.ResultScanner
    AsyncTable.getScanner(org.apache.hadoop.hbase.client.Scan scan)
    Returns a scanner on the current table as specified by the Scan object.
    default org.apache.hadoop.hbase.client.ResultScanner
    Table.getScanner(org.apache.hadoop.hbase.client.Scan scan)
    Returns a scanner on the current table as specified by the Scan object.
    protected void
    AbstractClientScanner.initScanMetrics(org.apache.hadoop.hbase.client.Scan scan)
     Check and initialize if the application wants to collect scan metrics.
    void
    AsyncTable.scan(org.apache.hadoop.hbase.client.Scan scan, C consumer)
    The scan API uses the observer pattern.
    CompletableFuture<List<org.apache.hadoop.hbase.client.Result>>
    AsyncTable.scanAll(org.apache.hadoop.hbase.client.Scan scan)
    Return all the results that match the given scan object.
    static void
    PackagePrivateFieldAccessor.setMvccReadPoint(org.apache.hadoop.hbase.client.Scan scan, long mvccReadPoint)
     
    org.apache.hadoop.hbase.client.OnlineLogRecord.OnlineLogRecordBuilder
    OnlineLogRecord.OnlineLogRecordBuilder.setScan(org.apache.hadoop.hbase.client.Scan scan)
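
     A minimal sketch of driving a configured Scan through Table.getScanner; the table name "t1" and the surrounding method are placeholders:

        static void scanTable(org.apache.hadoop.conf.Configuration conf) throws Exception {
          try (Connection conn = ConnectionFactory.createConnection(conf);
               Table table = conn.getTable(TableName.valueOf("t1"))) {
            Scan scan = new Scan().setCaching(500).setCacheBlocks(false);
            try (ResultScanner scanner = table.getScanner(scan)) {   // scanner bound to this Scan
              for (Result r : scanner) {
                // process each Result
              }
            }
          }
          // Asynchronous variant: AsyncTable.scanAll(scan) returns a
          // CompletableFuture<List<Result>> and buffers every row, so it is
          // only suitable for scans known to return a small number of rows.
        }
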
     
    Constructors in org.apache.hadoop.hbase.client with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier
    Constructor
    Description
     
    ClientSideRegionScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.hbase.client.TableDescriptor htd, org.apache.hadoop.hbase.client.RegionInfo hri, org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.client.metrics.ScanMetrics scanMetrics)
     
     
    ImmutableScan(org.apache.hadoop.hbase.client.Scan scan)
     Create an immutable instance of Scan from the given Scan object.
     
    Scan(org.apache.hadoop.hbase.client.Scan scan)
    Creates a new instance of this class while copying all values.
     
    TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path restoreDir, String snapshotName, org.apache.hadoop.hbase.client.Scan scan)
    Creates a TableSnapshotScanner.
     
    TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, org.apache.hadoop.hbase.client.Scan scan)
     
     
    TableSnapshotScanner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.Path rootDir, org.apache.hadoop.fs.Path restoreDir, String snapshotName, org.apache.hadoop.hbase.client.Scan scan, boolean snapshotAlreadyRestored)
    Creates a TableSnapshotScanner.
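
     The Scan copy constructor above duplicates every setting of an existing scan, and ImmutableScan wraps a scan so that its mutators reject changes. A sketch, mainly relevant to internal callers; "cf" is a placeholder family:

        Scan template = new Scan().addFamily(Bytes.toBytes("cf")).setLimit(100);
        Scan copy = new Scan(template);               // independent copy of all scan settings
        Scan readOnly = new ImmutableScan(template);  // setters on this instance are rejected
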
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.client.trace

    Methods in org.apache.hadoop.hbase.client.trace with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    org.apache.hadoop.hbase.client.trace.TableOperationSpanBuilder
    TableOperationSpanBuilder.setOperation(org.apache.hadoop.hbase.client.Scan scan)
     
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.coprocessor

    Methods in org.apache.hadoop.hbase.coprocessor with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    default org.apache.hadoop.hbase.regionserver.RegionScanner
    RegionObserver.postScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c, org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.RegionScanner s)
    Called after the client opens a new scanner.
    default void
    RegionObserver.preScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment> c, org.apache.hadoop.hbase.client.Scan scan)
    Called before the client opens a new scanner.
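
     A sketch of a RegionObserver that narrows user scans in preScannerOpen before the scanner is created; the class name and the versions cap are illustrative, and the wiring through RegionCoprocessor.getRegionObserver follows the HBase 2.x+ coprocessor API:

        import java.io.IOException;
        import java.util.Optional;
        import org.apache.hadoop.hbase.client.Scan;
        import org.apache.hadoop.hbase.coprocessor.ObserverContext;
        import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
        import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
        import org.apache.hadoop.hbase.coprocessor.RegionObserver;

        public class ScanCappingObserver implements RegionCoprocessor, RegionObserver {
          @Override
          public Optional<RegionObserver> getRegionObserver() {
            return Optional.of(this);   // expose this instance as the RegionObserver
          }

          @Override
          public void preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c, Scan scan)
              throws IOException {
            // Cap the number of versions any client scan may read from this region.
            if (scan.getMaxVersions() > 3) {
              scan.readVersions(3);
            }
          }
        }
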
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.io

    Methods in org.apache.hadoop.hbase.io with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    boolean
    HalfStoreFileReader.passesKeyRangeFilter(org.apache.hadoop.hbase.client.Scan scan)
     
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.mapred

    Method parameters in org.apache.hadoop.hbase.mapred with type arguments of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static void
    TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<org.apache.hadoop.hbase.client.Scan>> snapshotScans, Class<? extends org.apache.hadoop.hbase.mapred.TableMap> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapred.JobConf job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
     Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot.
    static void
    MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<org.apache.hadoop.hbase.client.Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir)
     
    Constructors in org.apache.hadoop.hbase.mapred with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier
    Constructor
    Description
     
    TableSnapshotRegionSplit(org.apache.hadoop.hbase.client.TableDescriptor htd, org.apache.hadoop.hbase.client.RegionInfo regionInfo, List<String> locations, org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.fs.Path restoreDir)
     
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.mapreduce

    Methods in org.apache.hadoop.hbase.mapreduce that return org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.client.Scan
    TableMapReduceUtil.convertStringToScan(String base64)
    Converts the given Base64 string back into a Scan instance.
    static org.apache.hadoop.hbase.client.Scan
    TableInputFormat.createScanFromConfiguration(org.apache.hadoop.conf.Configuration conf)
    Sets up a Scan instance, applying settings from the configuration property constants defined in TableInputFormat.
    static org.apache.hadoop.hbase.client.Scan
    TableSnapshotInputFormatImpl.extractScanFromConf(org.apache.hadoop.conf.Configuration conf)
     
    org.apache.hadoop.hbase.client.Scan
    TableInputFormatBase.getScan()
     Gets the scan defining the actual details, such as columns.
    org.apache.hadoop.hbase.client.Scan
    TableSplit.getScan()
    Returns a Scan object from the stored string representation.
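
     TableMapReduceUtil stores a Scan in the job configuration as a Base64-encoded string; a round-trip sketch (both methods throw IOException, and "cf" is a placeholder family):

        Scan scan = new Scan().addFamily(Bytes.toBytes("cf")).setCaching(500);
        String encoded = TableMapReduceUtil.convertScanToString(scan);    // Base64-encoded form
        Scan restored = TableMapReduceUtil.convertStringToScan(encoded);  // same settings recovered
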
    Methods in org.apache.hadoop.hbase.mapreduce that return types with arguments of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.util.Triple<org.apache.hadoop.hbase.TableName,org.apache.hadoop.hbase.client.Scan,org.apache.hadoop.fs.Path>
    ExportUtils.getArgumentsFromCommandLine(org.apache.hadoop.conf.Configuration conf, String[] args)
     
    protected List<org.apache.hadoop.hbase.client.Scan>
    MultiTableInputFormatBase.getScans()
    Allows subclasses to get the list of Scan objects.
    Map<String,Collection<org.apache.hadoop.hbase.client.Scan>>
    MultiTableSnapshotInputFormatImpl.getSnapshotsToScans(org.apache.hadoop.conf.Configuration conf)
    Retrieve the snapshot name -> list<scan> mapping pushed to configuration by MultiTableSnapshotInputFormatImpl.setSnapshotToScans(Configuration, Map)
    Methods in org.apache.hadoop.hbase.mapreduce with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static void
    TableInputFormat.addColumns(org.apache.hadoop.hbase.client.Scan scan, byte[][] columns)
     Adds an array of columns specified using the old format, family:qualifier.
    static String
    TableMapReduceUtil.convertScanToString(org.apache.hadoop.hbase.client.Scan scan)
    Writes the given scan into a Base64 encoded string.
    static List<org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl.InputSplit>
    TableSnapshotInputFormatImpl.getSplits(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.snapshot.SnapshotManifest manifest, List<org.apache.hadoop.hbase.client.RegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf)
     
    static List<org.apache.hadoop.hbase.mapreduce.TableSnapshotInputFormatImpl.InputSplit>
    TableSnapshotInputFormatImpl.getSplits(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.snapshot.SnapshotManifest manifest, List<org.apache.hadoop.hbase.client.RegionInfo> regionManifests, org.apache.hadoop.fs.Path restoreDir, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hbase.util.RegionSplitter.SplitAlgorithm sa, int numSplits)
     
    static void
    GroupingTableMapper.initJob(String table, org.apache.hadoop.hbase.client.Scan scan, String groupColumns, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
    Use this before submitting a TableMap job.
    static void
    IdentityTableMapper.initJob(String table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, org.apache.hadoop.mapreduce.Job job)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(byte[] table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(byte[] table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(byte[] table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(String table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(String table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(String table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(String table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, Class<? extends org.apache.hadoop.mapreduce.InputFormat> inputFormatClass)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(org.apache.hadoop.hbase.TableName table, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
    Use this before submitting a TableMap job.
    static void
    TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
    Sets up the job for reading from a table snapshot.
    static void
    TableMapReduceUtil.initTableSnapshotMapperJob(String snapshotName, org.apache.hadoop.hbase.client.Scan scan, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir, org.apache.hadoop.hbase.util.RegionSplitter.SplitAlgorithm splitAlgo, int numSplitsPerRegion)
    Sets up the job for reading from a table snapshot.
    void
    TableInputFormatBase.setScan(org.apache.hadoop.hbase.client.Scan scan)
     Sets the scan defining the actual details, such as columns.
    void
    TableRecordReader.setScan(org.apache.hadoop.hbase.client.Scan scan)
     Sets the scan defining the actual details, such as columns.
    void
    TableRecordReaderImpl.setScan(org.apache.hadoop.hbase.client.Scan scan)
     Sets the scan defining the actual details, such as columns.
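
     The initTableMapperJob overloads above wire a Scan into a MapReduce job. A minimal driver sketch, where the table name, column family, and MyTableMapper (a hypothetical subclass of TableMapper<Text, IntWritable>) are placeholders:

        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "scan-backed-mapper");
        Scan scan = new Scan()
            .addFamily(Bytes.toBytes("cf"))      // placeholder column family
            .setCaching(500)
            .setCacheBlocks(false);              // commonly disabled for full-table MR scans
        TableMapReduceUtil.initTableMapperJob(
            "my_table",                          // placeholder table name
            scan,
            MyTableMapper.class,                 // hypothetical TableMapper implementation
            Text.class,                          // mapper output key class
            IntWritable.class,                   // mapper output value class
            job);
        job.waitForCompletion(true);
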
    Method parameters in org.apache.hadoop.hbase.mapreduce with type arguments of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static void
    TableMapReduceUtil.initMultiTableSnapshotMapperJob(Map<String,Collection<org.apache.hadoop.hbase.client.Scan>> snapshotScans, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, org.apache.hadoop.fs.Path tmpRestoreDir)
    Sets up the job for reading from one or more table snapshots, with one or more scans per snapshot.
    static void
    TableMapReduceUtil.initTableMapperJob(List<org.apache.hadoop.hbase.client.Scan> scans, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job)
    Use this before submitting a Multi TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(List<org.apache.hadoop.hbase.client.Scan> scans, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars)
    Use this before submitting a Multi TableMap job.
    static void
    TableMapReduceUtil.initTableMapperJob(List<org.apache.hadoop.hbase.client.Scan> scans, Class<? extends org.apache.hadoop.hbase.mapreduce.TableMapper> mapper, Class<?> outputKeyClass, Class<?> outputValueClass, org.apache.hadoop.mapreduce.Job job, boolean addDependencyJars, boolean initCredentials)
    Use this before submitting a Multi TableMap job.
    static void
    MultiTableSnapshotInputFormat.setInput(org.apache.hadoop.conf.Configuration configuration, Map<String,Collection<org.apache.hadoop.hbase.client.Scan>> snapshotScans, org.apache.hadoop.fs.Path tmpRestoreDir)
     
    void
    MultiTableSnapshotInputFormatImpl.setInput(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<org.apache.hadoop.hbase.client.Scan>> snapshotScans, org.apache.hadoop.fs.Path restoreDir)
    Configure conf to read from snapshotScans, with snapshots restored to a subdirectory of restoreDir.
    protected void
    MultiTableInputFormatBase.setScans(List<org.apache.hadoop.hbase.client.Scan> scans)
    Allows subclasses to set the list of Scan objects.
    void
    MultiTableSnapshotInputFormatImpl.setSnapshotToScans(org.apache.hadoop.conf.Configuration conf, Map<String,Collection<org.apache.hadoop.hbase.client.Scan>> snapshotScans)
    Push snapshotScans to conf (under the key MultiTableSnapshotInputFormatImpl.SNAPSHOT_TO_SCANS_KEY)
    Constructors in org.apache.hadoop.hbase.mapreduce with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier
    Constructor
    Description
     
    InputSplit(org.apache.hadoop.hbase.client.TableDescriptor htd, org.apache.hadoop.hbase.client.RegionInfo regionInfo, List<String> locations, org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.fs.Path restoreDir)
     
     
    TableSnapshotRegionSplit(org.apache.hadoop.hbase.client.TableDescriptor htd, org.apache.hadoop.hbase.client.RegionInfo regionInfo, List<String> locations, org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.fs.Path restoreDir)
     
     
    TableSplit(org.apache.hadoop.hbase.TableName tableName, org.apache.hadoop.hbase.client.Scan scan, byte[] startRow, byte[] endRow, String location)
    Creates a new instance while assigning all variables.
     
    TableSplit(org.apache.hadoop.hbase.TableName tableName, org.apache.hadoop.hbase.client.Scan scan, byte[] startRow, byte[] endRow, String location, long length)
    Creates a new instance while assigning all variables.
     
    TableSplit(org.apache.hadoop.hbase.TableName tableName, org.apache.hadoop.hbase.client.Scan scan, byte[] startRow, byte[] endRow, String location, String encodedRegionName, long length)
    Creates a new instance while assigning all variables.
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.mob

    Methods in org.apache.hadoop.hbase.mob with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static boolean
    MobUtils.isCacheMobBlocks(org.apache.hadoop.hbase.client.Scan scan)
     Indicates whether the scan carries the attribute for caching MOB blocks.
    static boolean
    MobUtils.isRawMobScan(org.apache.hadoop.hbase.client.Scan scan)
    Indicates whether it's a raw scan.
    static boolean
    MobUtils.isReadEmptyValueOnMobCellMiss(org.apache.hadoop.hbase.client.Scan scan)
     Indicates whether to return a null value when the mob file is missing or corrupt.
    static boolean
    MobUtils.isRefOnlyScan(org.apache.hadoop.hbase.client.Scan scan)
    Indicates whether it's a reference only scan.
    static void
    MobUtils.setCacheMobBlocks(org.apache.hadoop.hbase.client.Scan scan, boolean cacheBlocks)
    Sets the attribute of caching blocks in the scan.
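
     A sketch of how these helpers read and write the MOB-related attributes carried on a Scan:

        Scan scan = new Scan();
        MobUtils.setCacheMobBlocks(scan, false);            // store the cache-mob-blocks attribute on the scan
        boolean cacheMob = MobUtils.isCacheMobBlocks(scan); // reads the same attribute back (false here)
        boolean rawMob = MobUtils.isRawMobScan(scan);       // true only if the raw-mob attribute was set
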
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.quotas

    Methods in org.apache.hadoop.hbase.quotas that return org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.client.Scan
    QuotaTableUtil.makeQuotaSnapshotScan()
    Creates a Scan which returns only quota snapshots from the quota table.
    static org.apache.hadoop.hbase.client.Scan
    QuotaTableUtil.makeQuotaSnapshotScanForTable(org.apache.hadoop.hbase.TableName tn)
    Creates a Scan which returns only SpaceQuotaSnapshot from the quota table for a specific table.
    static org.apache.hadoop.hbase.client.Scan
    QuotaTableUtil.makeScan(org.apache.hadoop.hbase.quotas.QuotaFilter filter)
     
    Constructors in org.apache.hadoop.hbase.quotas with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier
    Constructor
    Description
     
    QuotaRetriever(org.apache.hadoop.hbase.client.Connection conn, org.apache.hadoop.hbase.client.Scan scan)
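
     The factory methods above produce scans targeted at the quota system table. A sketch of reading space-quota snapshots, assuming conn is an open Connection:

        Scan snapshotScan = QuotaTableUtil.makeQuotaSnapshotScan();        // only quota-snapshot rows
        try (Table quotas = conn.getTable(QuotaTableUtil.QUOTA_TABLE_NAME);
             ResultScanner rs = quotas.getScanner(snapshotScan)) {
          for (Result r : rs) {
            // each Result holds serialized space-quota snapshot data for one table
          }
        }
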
     
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.regionserver

    Subclasses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.regionserver
    Modifier and Type
    Class
    Description
    class 
    org.apache.hadoop.hbase.regionserver.InternalScan
    Special scanner, currently used for increment operations to allow additional server-side arguments for Scan operations.
    Methods in org.apache.hadoop.hbase.regionserver that return org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    org.apache.hadoop.hbase.client.Scan
    CustomizedScanInfoBuilder.getScan()
     
    org.apache.hadoop.hbase.client.Scan
    ScanOptions.getScan()
    Returns a copy of the Scan object.
    Methods in org.apache.hadoop.hbase.regionserver with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    protected org.apache.hadoop.hbase.regionserver.KeyValueScanner
    HMobStore.createScanner(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, NavigableSet<byte[]> targetCols, long readPt)
    Gets the MobStoreScanner or MobReversedStoreScanner.
    protected org.apache.hadoop.hbase.regionserver.KeyValueScanner
    HStore.createScanner(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, NavigableSet<byte[]> targetCols, long readPt)
     
    org.apache.hadoop.hbase.regionserver.RegionScannerImpl
    HRegion.getScanner(org.apache.hadoop.hbase.client.Scan scan)
     
    org.apache.hadoop.hbase.regionserver.RegionScannerImpl
    HRegion.getScanner(org.apache.hadoop.hbase.client.Scan scan, List<org.apache.hadoop.hbase.regionserver.KeyValueScanner> additionalScanners)
     
    org.apache.hadoop.hbase.regionserver.KeyValueScanner
    HStore.getScanner(org.apache.hadoop.hbase.client.Scan scan, NavigableSet<byte[]> targetCols, long readPt)
    Return a scanner for both the memstore and the HStore files.
    org.apache.hadoop.hbase.regionserver.RegionScanner
    Region.getScanner(org.apache.hadoop.hbase.client.Scan scan)
    Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan.
    org.apache.hadoop.hbase.regionserver.RegionScanner
    Region.getScanner(org.apache.hadoop.hbase.client.Scan scan, List<org.apache.hadoop.hbase.regionserver.KeyValueScanner> additionalScanners)
    Return an iterator that scans over the HRegion, returning the indicated columns and rows specified by the Scan.
    protected org.apache.hadoop.hbase.regionserver.RegionScannerImpl
    HRegion.instantiateRegionScanner(org.apache.hadoop.hbase.client.Scan scan, List<org.apache.hadoop.hbase.regionserver.KeyValueScanner> additionalScanners, long nonceGroup, long nonce)
     
    boolean
    StoreFileReader.passesKeyRangeFilter(org.apache.hadoop.hbase.client.Scan scan)
     Checks whether the given scan's rowkey range overlaps with the current storefile's key range.
    org.apache.hadoop.hbase.regionserver.RegionScanner
    RegionCoprocessorHost.postScannerOpen(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.RegionScanner s)
     
    void
    RegionCoprocessorHost.preScannerOpen(org.apache.hadoop.hbase.client.Scan scan)
     
    org.apache.hadoop.hbase.regionserver.ScanInfo
    RegionCoprocessorHost.preStoreScannerOpen(org.apache.hadoop.hbase.regionserver.HStore store, org.apache.hadoop.hbase.client.Scan scan)
    Called before open store scanner for user scan.
    boolean
    KeyValueScanner.shouldUseScanner(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.HStore store, long oldestUnexpiredTS)
     Allows filtering out scanners (both StoreFile and memstore) that we do not want to use, based on criteria such as Bloom filters and timestamp ranges.
    boolean
    NonLazyKeyValueScanner.shouldUseScanner(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.HStore store, long oldestUnexpiredTS)
     
    boolean
    SegmentScanner.shouldUseScanner(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.HStore store, long oldestUnexpiredTS)
     This functionality should be resolved at the higher level (MemStoreScanner); currently returns true by default.
    boolean
    StoreFileScanner.shouldUseScanner(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.HStore store, long oldestUnexpiredTS)
     
    Constructors in org.apache.hadoop.hbase.regionserver with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier
    Constructor
    Description
     
    CustomizedScanInfoBuilder(org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.client.Scan scan)
     
     
    InternalScan(org.apache.hadoop.hbase.client.Scan scan)
     
     
    MobStoreScanner(org.apache.hadoop.hbase.regionserver.HStore store, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.client.Scan scan, NavigableSet<byte[]> columns, long readPt)
     
     
    ReversedStoreScanner(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, NavigableSet<byte[]> columns, List<? extends org.apache.hadoop.hbase.regionserver.KeyValueScanner> scanners)
    Constructor for testing.
     
    ReversedStoreScanner(org.apache.hadoop.hbase.regionserver.HStore store, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.client.Scan scan, NavigableSet<byte[]> columns, long readPt)
    Opens a scanner across memstore, snapshot, and all StoreFiles.
     
    StoreScanner(org.apache.hadoop.hbase.regionserver.HStore store, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.client.Scan scan, NavigableSet<byte[]> columns, long readPt)
    Opens a scanner across memstore, snapshot, and all StoreFiles.
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.regionserver.querymatcher

    Methods in org.apache.hadoop.hbase.regionserver.querymatcher with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.regionserver.querymatcher.NormalUserScanQueryMatcher
    NormalUserScanQueryMatcher.create(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.regionserver.querymatcher.ColumnTracker columns, org.apache.hadoop.hbase.regionserver.querymatcher.DeleteTracker deletes, boolean hasNullColumn, long oldestUnexpiredTS, long now)
     
    static org.apache.hadoop.hbase.regionserver.querymatcher.RawScanQueryMatcher
    RawScanQueryMatcher.create(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.regionserver.querymatcher.ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)
     
    static org.apache.hadoop.hbase.regionserver.querymatcher.UserScanQueryMatcher
    UserScanQueryMatcher.create(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, NavigableSet<byte[]> columns, long oldestUnexpiredTS, long now, org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost regionCoprocessorHost)
     
    protected static org.apache.hadoop.hbase.util.Pair<org.apache.hadoop.hbase.regionserver.querymatcher.DeleteTracker,org.apache.hadoop.hbase.regionserver.querymatcher.ColumnTracker>
    ScanQueryMatcher.getTrackers(org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost host, NavigableSet<byte[]> columns, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, long oldestUnexpiredTS, org.apache.hadoop.hbase.client.Scan userScan)
     
    Constructors in org.apache.hadoop.hbase.regionserver.querymatcher with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier
    Constructor
    Description
    protected
    NormalUserScanQueryMatcher(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.regionserver.querymatcher.ColumnTracker columns, boolean hasNullColumn, org.apache.hadoop.hbase.regionserver.querymatcher.DeleteTracker deletes, long oldestUnexpiredTS, long now)
     
    protected
    RawScanQueryMatcher(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.regionserver.querymatcher.ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)
     
    protected
    UserScanQueryMatcher(org.apache.hadoop.hbase.client.Scan scan, org.apache.hadoop.hbase.regionserver.ScanInfo scanInfo, org.apache.hadoop.hbase.regionserver.querymatcher.ColumnTracker columns, boolean hasNullColumn, long oldestUnexpiredTS, long now)
     
  • Uses of org.apache.hadoop.hbase.client.Scan in org.apache.hadoop.hbase.shaded.protobuf

    Methods in org.apache.hadoop.hbase.shaded.protobuf that return org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.client.Scan
    ProtobufUtil.toScan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.Scan proto)
    Convert a protocol buffer Scan to a client Scan
    Methods in org.apache.hadoop.hbase.shaded.protobuf with parameters of type org.apache.hadoop.hbase.client.Scan
    Modifier and Type
    Method
    Description
    static org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ScanRequest
    RequestConverter.buildScanRequest(byte[] regionName, org.apache.hadoop.hbase.client.Scan scan, int numberOfRows, boolean closeScanner)
    Create a protocol buffer ScanRequest for a client Scan
    static org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.Scan
    ProtobufUtil.toScan(org.apache.hadoop.hbase.client.Scan scan)
    Convert a client Scan to a protocol buffer Scan
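
     A sketch of the round trip between a client Scan and its shaded protobuf form, as used when scans cross the RPC boundary; both methods throw IOException, and "cf" is a placeholder family:

        Scan scan = new Scan().addFamily(Bytes.toBytes("cf")).setLimit(10);
        ClientProtos.Scan proto = ProtobufUtil.toScan(scan);   // client Scan -> protobuf message
        Scan restored = ProtobufUtil.toScan(proto);            // protobuf message -> client Scan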