Package | Description |
---|---|
org.apache.hadoop.hbase | |
org.apache.hadoop.hbase.client | Provides the HBase client. |
org.apache.hadoop.hbase.ipc | Tools to help define network clients and servers. |
org.apache.hadoop.hbase.master | |
org.apache.hadoop.hbase.master.handler | |
org.apache.hadoop.hbase.master.procedure | |
org.apache.hadoop.hbase.procedure2 | |
org.apache.hadoop.hbase.regionserver | |
org.apache.hadoop.hbase.regionserver.compactions | |
org.apache.hadoop.hbase.security | |
org.apache.hadoop.hbase.security.access | |
org.apache.hadoop.hbase.security.token | |
org.apache.hadoop.hbase.security.visibility | |
org.apache.hadoop.hbase.snapshot | |
Modifier and Type | Method and Description |
---|---|
JVMClusterUtil.MasterThread | LocalHBaseCluster.addMaster(org.apache.hadoop.conf.Configuration c, int index, User user) |
JVMClusterUtil.RegionServerThread | LocalHBaseCluster.addRegionServer(org.apache.hadoop.conf.Configuration config, int index, User user) |
static boolean | ProcedureInfo.isProcedureOwner(ProcedureInfo procInfo, User user). Check if the user is this procedure's owner. |
HTableDescriptor | HTableDescriptor.setOwner(User owner). Deprecated. |
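
For illustration, a minimal sketch of driving the in-JVM cluster methods above under a synthetic test identity. The cluster sizing, server index, user name, and group below are assumptions for the example, not values from this page:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.LocalHBaseCluster;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

Configuration conf = HBaseConfiguration.create();
LocalHBaseCluster cluster = new LocalHBaseCluster(conf, 1, 1); // 1 master, 1 region server
cluster.startup();
// Add a second region server running as a synthetic test user (hypothetical name/group).
User rsUser = User.createUserForTesting(conf, "rs2", new String[] { "testgroup" });
JVMClusterUtil.RegionServerThread rst = cluster.addRegionServer(conf, 1, rsUser);
rst.start();
```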
Modifier and Type | Field and Description |
---|---|
protected User | ConnectionManager.HConnectionImplementation.user |
Modifier and Type | Method and Description |
---|---|
(package private) static ClusterConnection | ConnectionManager.createConnection(org.apache.hadoop.conf.Configuration conf, boolean managed, ExecutorService pool, User user). Deprecated. |
(package private) static Connection | ConnectionFactory.createConnection(org.apache.hadoop.conf.Configuration conf, boolean managed, ExecutorService pool, User user) |
(package private) static ClusterConnection | HConnectionManager.createConnection(org.apache.hadoop.conf.Configuration conf, boolean managed, ExecutorService pool, User user). Deprecated. |
static HConnection | ConnectionManager.createConnection(org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user). Create a new HConnection instance using the passed conf instance. |
static Connection | ConnectionFactory.createConnection(org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user). Create a new Connection instance using the passed conf instance. |
static HConnection | HConnectionManager.createConnection(org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user). Deprecated. |
static HConnection | ConnectionManager.createConnection(org.apache.hadoop.conf.Configuration conf, User user). Create a new HConnection instance using the passed conf instance. |
static Connection | ConnectionFactory.createConnection(org.apache.hadoop.conf.Configuration conf, User user). Create a new Connection instance using the passed conf instance. |
static HConnection | HConnectionManager.createConnection(org.apache.hadoop.conf.Configuration conf, User user). Deprecated. |
static ClusterConnection | ConnectionUtils.createShortCircuitConnection(org.apache.hadoop.conf.Configuration conf, ExecutorService pool, User user, ServerName serverName, org.apache.hadoop.hbase.protobuf.generated.AdminProtos.AdminService.BlockingInterface admin, org.apache.hadoop.hbase.protobuf.generated.ClientProtos.ClientService.BlockingInterface client). Creates a short-circuit connection that can bypass the RPC layer (serialization, deserialization, networking, etc.) when talking to a local server. |
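
The ConnectionFactory overloads above are the public entry points for opening a connection as a specific User. A minimal sketch of the pool-and-user variant; the pool size and table name are placeholders, not values from this page:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.security.User;

public class UserConnectionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    User user = User.getCurrent();                          // identity all RPCs will carry
    ExecutorService pool = Executors.newFixedThreadPool(4); // pool size is arbitrary
    try (Connection connection = ConnectionFactory.createConnection(conf, pool, user);
         Table table = connection.getTable(TableName.valueOf("t1"))) { // "t1" is a placeholder
      // Gets/puts issued through table run as the supplied user.
    } finally {
      pool.shutdown();
    }
  }
}
```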
Constructor and Description |
---|
ConnectionManager.HConnectionImplementation(org.apache.hadoop.conf.Configuration conf, boolean managed, ExecutorService pool, User user) |
ConnectionManager.HConnectionImplementation(org.apache.hadoop.conf.Configuration conf, boolean managed, ExecutorService pool, User user, String clusterId). Constructor. |
ConnectionUtils.MasterlessConnection(org.apache.hadoop.conf.Configuration conf, boolean managed, ExecutorService pool, User user) |
Modifier and Type | Field and Description |
---|---|
private User | AsyncRpcClient.RpcChannelImplementation.ticket |
(package private) User | AsyncRpcChannel.ticket |
(package private) User | ConnectionId.ticket |
private User | AbstractRpcClient.BlockingRpcChannelImplementation.ticket |
private User | RpcServer.Call.user |
protected User | RpcServer.Connection.user |
Modifier and Type | Method and Description |
---|---|
static User | RpcServer.getRequestUser(). Returns the user credentials associated with the current RPC request, or null if no credentials were provided. |
User | RpcServer.Call.getRequestUser() |
User | RpcCallContext.getRequestUser(). Returns the user credentials associated with the current RPC request, or null if no credentials were provided. |
User | ConnectionId.getTicket() |
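
RpcServer.getRequestUser() is the hook server-side code uses to attribute work to a caller. A minimal sketch of the common fallback pattern (this mirrors what the security coprocessors do, shown here purely as an illustration):

```java
// Inside server-side code handling an RPC: prefer the request credentials,
// and fall back to the process identity for internally triggered operations.
User user = RpcServer.getRequestUser();
if (user == null) {
  user = User.getCurrent(); // may throw IOException
}
```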
Modifier and Type | Method and Description |
---|---|
protected Pair<com.google.protobuf.Message,CellScanner> | RpcClientImpl.call(PayloadCarryingRpcController pcrc, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress addr, MetricsConnection.CallStats callStats). Makes a call, passing param to the IPC server running at the given address, using the ticket credentials, and returns the value. |
protected Pair<com.google.protobuf.Message,CellScanner> | AsyncRpcClient.call(PayloadCarryingRpcController pcrc, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress addr, MetricsConnection.CallStats callStats). Makes a call, passing param to the IPC server running at the given address, using the ticket credentials, and returns the value. |
protected abstract Pair<com.google.protobuf.Message,CellScanner> | AbstractRpcClient.call(PayloadCarryingRpcController pcrc, com.google.protobuf.Descriptors.MethodDescriptor md, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress isa, MetricsConnection.CallStats callStats). Makes a call, passing param to the IPC server running at the given address, using the ticket credentials, and returns the value. |
(package private) com.google.protobuf.Message | AbstractRpcClient.callBlockingMethod(com.google.protobuf.Descriptors.MethodDescriptor md, PayloadCarryingRpcController pcrc, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress isa). Makes a blocking call. |
private void | AsyncRpcClient.callMethod(com.google.protobuf.Descriptors.MethodDescriptor md, PayloadCarryingRpcController pcrc, com.google.protobuf.Message param, com.google.protobuf.Message returnType, User ticket, InetSocketAddress addr, com.google.protobuf.RpcCallback<com.google.protobuf.Message> done). Calls the method asynchronously. |
com.google.protobuf.BlockingRpcChannel | RpcClient.createBlockingRpcChannel(ServerName sn, User user, int rpcTimeout). Creates a "channel" that can be used by a blocking protobuf service. |
com.google.protobuf.BlockingRpcChannel | AbstractRpcClient.createBlockingRpcChannel(ServerName sn, User ticket, int defaultOperationTimeout) |
com.google.protobuf.RpcChannel | AsyncRpcClient.createRpcChannel(ServerName sn, User user, int rpcTimeout). Creates a "channel" that can be used by a protobuf service. |
private AsyncRpcChannel | AsyncRpcClient.createRpcChannel(String serviceName, InetSocketAddress location, User ticket). Creates an RPC channel. |
protected RpcClientImpl.Connection | RpcClientImpl.getConnection(User ticket, Call call, InetSocketAddress addr). Get a connection from the pool, or create a new one and add it to the pool. |
int | PriorityFunction.getPriority(org.apache.hadoop.hbase.protobuf.generated.RPCProtos.RequestHeader header, com.google.protobuf.Message param, User user). Returns the 'priority type' of the specified request. |
static int | ConnectionId.hashCode(User ticket, String serviceName, InetSocketAddress address) |
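
A hedged sketch of how these channel factories are used (these are internal ipc classes, not public client API; conf and serverName are assumed to be in scope, and the timeout value is arbitrary):

```java
// Build an RpcClient, open a blocking channel to a server as the current
// user, and wrap it in a generated protobuf stub.
RpcClient rpcClient = RpcClientFactory.createClient(conf, HConstants.CLUSTER_ID_DEFAULT);
try {
  com.google.protobuf.BlockingRpcChannel channel =
      rpcClient.createBlockingRpcChannel(serverName, User.getCurrent(), 60000);
  AdminProtos.AdminService.BlockingInterface admin =
      AdminProtos.AdminService.newBlockingStub(channel);
  // Calls made through the admin stub now carry the supplied user's credentials.
} finally {
  rpcClient.close();
}
```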
Constructor and Description |
---|
AbstractRpcClient.BlockingRpcChannelImplementation(AbstractRpcClient rpcClient, ServerName sn, User ticket, int channelOperationTimeout) |
AsyncRpcChannel(io.netty.bootstrap.Bootstrap bootstrap, AsyncRpcClient client, User ticket, String serviceName, InetSocketAddress address). Constructor for a Netty RPC channel. |
AsyncRpcClient.RpcChannelImplementation(AsyncRpcClient rpcClient, ServerName sn, User ticket, int channelOperationTimeout) |
ConnectionId(User ticket, String serviceName, InetSocketAddress address) |
Modifier and Type | Method and Description |
---|---|
int | MasterAnnotationReadingPriorityFunction.getPriority(org.apache.hadoop.hbase.protobuf.generated.RPCProtos.RequestHeader header, com.google.protobuf.Message param, User user) |
Modifier and Type | Field and Description |
---|---|
private User | CreateTableHandler.activeUser |
Modifier and Type | Method and Description |
---|---|
User | MasterProcedureEnv.getRequestUser() |
Modifier and Type | Method and Description |
---|---|
boolean | ProcedureExecutor.isProcedureOwner(long procId, User user). Check if the user is this procedure's owner. |
void | ProcedureExecutor.setFailureResultForNonce(NonceKey nonceKey, String procName, User procOwner, IOException exception). If the procedure failed before being submitted, we may want to return the same error to requests carrying the same nonceKey. |
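
As a sketch of how the owner check fits together (assuming a running ProcedureExecutor named executor and a procId taken from a client request; the superuser escape hatch is an assumption modeled on the access-control pattern):

```java
// Hypothetical guard: allow an abort only by the procedure's owner or a superuser.
User requestUser = RpcServer.getRequestUser();
if (executor.isProcedureOwner(procId, requestUser) || Superusers.isSuperUser(requestUser)) {
  executor.abort(procId);
}
```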
Modifier and Type | Field and Description |
---|---|
private User | SplitRequest.user |
private User | RegionMergeRequest.user |
private User | CompactSplitThread.CompactionRunner.user |
Modifier and Type | Method and Description |
---|---|
List<StoreFile> | Store.compact(CompactionContext compaction, CompactionThroughputController throughputController, User user) |
List<StoreFile> | HStore.compact(CompactionContext compaction, CompactionThroughputController throughputController, User user) |
boolean | HRegion.compact(CompactionContext compaction, Store store, CompactionThroughputController throughputController, User user) |
List<org.apache.hadoop.fs.Path> | StripeStoreEngine.StripeCompaction.compact(CompactionThroughputController throughputController, User user) |
List<org.apache.hadoop.fs.Path> | DefaultStoreEngine.DefaultCompactionContext.compact(CompactionThroughputController throughputController, User user) |
(package private) PairOfSameType<Region> | SplitTransactionImpl.createDaughters(Server server, RegionServerServices services, User user). Prepare the regions and region files. |
(package private) HRegion | RegionMergeTransactionImpl.createMergedRegion(Server server, RegionServerServices services, User user). Prepare the merged region and region files. |
private void | CompactSplitThread.CompactionRunner.doCompaction(User user) |
private void | SplitRequest.doSplitting(User user) |
Region | RegionMergeTransaction.execute(Server server, RegionServerServices services, User user). Run the transaction. |
PairOfSameType<Region> | SplitTransaction.execute(Server server, RegionServerServices services, User user). Run the transaction. |
PairOfSameType<Region> | SplitTransactionImpl.execute(Server server, RegionServerServices services, User user). Run the transaction. |
HRegion | RegionMergeTransactionImpl.execute(Server server, RegionServerServices services, User user) |
int | RSRpcServices.getPriority(org.apache.hadoop.hbase.protobuf.generated.RPCProtos.RequestHeader header, com.google.protobuf.Message param, User user) |
int | AnnotationReadingPriorityFunction.getPriority(org.apache.hadoop.hbase.protobuf.generated.RPCProtos.RequestHeader header, com.google.protobuf.Message param, User user). Returns a 'priority' based on the request type. |
private List<StoreFile> | HStore.moveCompatedFilesIntoPlace(CompactionRequest cr, List<org.apache.hadoop.fs.Path> newFiles, User user) |
CompactionContext | Store.requestCompaction(int priority, CompactionRequest baseRequest, User user) |
CompactionContext | HStore.requestCompaction(int priority, CompactionRequest baseRequest, User user) |
CompactionRequest | CompactSplitThread.requestCompaction(Region r, Store s, String why, int priority, CompactionRequest request, User user) |
CompactionRequest | CompactionRequestor.requestCompaction(Region r, Store s, String why, int pri, CompactionRequest request, User user) |
List<CompactionRequest> | CompactSplitThread.requestCompaction(Region r, String why, int p, List<Pair<CompactionRequest,Store>> requests, User user) |
List<CompactionRequest> | CompactionRequestor.requestCompaction(Region r, String why, int pri, List<Pair<CompactionRequest,Store>> requests, User user) |
private CompactionRequest | CompactSplitThread.requestCompactionInternal(Region r, Store s, String why, int priority, CompactionRequest request, boolean selectNow, User user) |
private List<CompactionRequest> | CompactSplitThread.requestCompactionInternal(Region r, String why, int p, List<Pair<CompactionRequest,Store>> requests, boolean selectNow, User user) |
void | CompactSplitThread.requestRegionsMerge(Region a, Region b, boolean forcible, long masterSystemTime, User user) |
void | CompactSplitThread.requestSplit(Region r, byte[] midKey, User user) |
boolean | RegionMergeTransaction.rollback(Server server, RegionServerServices services, User user). Roll back a failed transaction. |
boolean | SplitTransaction.rollback(Server server, RegionServerServices services, User user). Roll back a failed transaction. |
boolean | SplitTransactionImpl.rollback(Server server, RegionServerServices services, User user) |
boolean | RegionMergeTransactionImpl.rollback(Server server, RegionServerServices services, User user) |
private CompactionContext | CompactSplitThread.selectCompaction(Region r, Store s, int priority, CompactionRequest request, User user) |
void | RegionMergeTransactionImpl.stepsAfterPONR(Server server, RegionServerServices services, HRegion mergedRegion, User user) |
PairOfSameType<Region> | SplitTransactionImpl.stepsAfterPONR(Server server, RegionServerServices services, PairOfSameType<Region> regions, User user) |
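
These are server-internal APIs, but a rough sketch of the request-then-run flow for a user-attributed compaction may help orient readers. Here store is assumed to be an in-scope HStore, and the no-limit throughput controller is an assumption for the example:

```java
// Select files for compaction on behalf of the request user, then run it.
User user = RpcServer.getRequestUser();
CompactionContext context = store.requestCompaction(Store.PRIORITY_USER, null, user);
if (context != null) {
  List<StoreFile> newFiles =
      store.compact(context, NoLimitCompactionThroughputController.INSTANCE, user);
}
```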
Constructor and Description |
---|
CompactSplitThread.CompactionRunner(Store store, Region region, CompactionContext compaction, ThreadPoolExecutor parent, User user) |
RegionMergeRequest(Region a, Region b, HRegionServer hrs, boolean forcible, long masterSystemTime, User user) |
SplitRequest(Region region, byte[] midKey, HRegionServer hrs, User user) |
Modifier and Type | Method and Description |
---|---|
List<org.apache.hadoop.fs.Path> | DefaultCompactor.compact(CompactionRequest request, CompactionThroughputController throughputController, User user). Do a minor/major compaction on an explicit set of storefiles from a Store. |
List<org.apache.hadoop.fs.Path> | StripeCompactor.compact(CompactionRequest request, int targetCount, long targetSize, byte[] left, byte[] right, byte[] majorRangeFromRow, byte[] majorRangeToRow, CompactionThroughputController throughputController, User user) |
List<org.apache.hadoop.fs.Path> | StripeCompactor.compact(CompactionRequest request, List<byte[]> targetBoundaries, byte[] majorRangeFromRow, byte[] majorRangeToRow, CompactionThroughputController throughputController, User user) |
abstract List<org.apache.hadoop.fs.Path> | CompactionContext.compact(CompactionThroughputController throughputController, User user) |
private List<org.apache.hadoop.fs.Path> | StripeCompactor.compactInternal(StripeMultiFileWriter mw, CompactionRequest request, byte[] majorRangeFromRow, byte[] majorRangeToRow, CompactionThroughputController throughputController, User user) |
abstract List<org.apache.hadoop.fs.Path> | StripeCompactionPolicy.StripeCompactionRequest.execute(StripeCompactor compactor, CompactionThroughputController throughputController, User user). Executes the request against compactor (essentially, just calls the correct overload of the compact method), to simulate more dynamic dispatch. |
List<org.apache.hadoop.fs.Path> | StripeCompactionPolicy.BoundaryStripeCompactionRequest.execute(StripeCompactor compactor, CompactionThroughputController throughputController, User user) |
List<org.apache.hadoop.fs.Path> | StripeCompactionPolicy.SplitStripeCompactionRequest.execute(StripeCompactor compactor, CompactionThroughputController throughputController, User user) |
protected InternalScanner | Compactor.postCreateCoprocScanner(CompactionRequest request, ScanType scanType, InternalScanner scanner, User user). Calls coprocessor, if any, to create scanners - after normal scanner creation. |
protected InternalScanner | Compactor.preCreateCoprocScanner(CompactionRequest request, ScanType scanType, long earliestPutTs, List<StoreFileScanner> scanners, User user) |
Modifier and Type | Class and Description |
---|---|
static class | User.SecureHadoopUser. Bridges User invocations to underlying calls to UserGroupInformation for secure Hadoop 0.20 and versions 0.21 and above. |
Modifier and Type | Field and Description |
---|---|
private static User | Superusers.systemUser |
Modifier and Type | Method and Description |
---|---|
User | UserProvider.create(org.apache.hadoop.security.UserGroupInformation ugi). Wraps an underlying UserGroupInformation instance. |
static User | User.create(org.apache.hadoop.security.UserGroupInformation ugi). Wraps an underlying UserGroupInformation instance. |
static User | User.createUserForTesting(org.apache.hadoop.conf.Configuration conf, String name, String[] groups). Generates a new User instance specifically for use in test code. |
static User | User.SecureHadoopUser.createUserForTesting(org.apache.hadoop.conf.Configuration conf, String name, String[] groups) |
User | UserProvider.getCurrent() |
static User | User.getCurrent(). Returns the User instance within current execution context. |
static User | Superusers.getSystemUser() |
Modifier and Type | Method and Description |
---|---|
static boolean | Superusers.isSuperUser(User user) |
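
A short sketch of the core User factory methods listed above; the user name and group below are arbitrary examples:

```java
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.User;

public class UserIdentityExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    User current = User.getCurrent(); // wraps the process's UserGroupInformation
    System.out.println("running as " + current.getShortName());

    // Synthetic identity for tests; name and group are arbitrary examples.
    User alice = User.createUserForTesting(conf, "alice", new String[] { "admins" });
    alice.runAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws Exception {
        // Code here observes "alice" as the calling identity.
        return null;
      }
    });
  }
}
```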
Modifier and Type | Field and Description |
---|---|
private User | AccessControlFilter.user |
private User | AuthResult.user |
Modifier and Type | Method and Description |
---|---|
private User | SecureBulkLoadEndpoint.getActiveUser() |
private User | AccessController.getActiveUser(). Returns the active user to which authorization checks should be applied. |
User | AuthResult.getUser() |
Modifier and Type | Method and Description |
---|---|
static AuthResult | AuthResult.allow(String request, String reason, User user, Permission.Action action, String namespace) |
static AuthResult | AuthResult.allow(String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) |
static AuthResult | AuthResult.allow(String request, String reason, User user, Permission.Action action, TableName table, Map<byte[],? extends Collection<?>> families) |
boolean | TableAuthManager.authorize(User user, Permission.Action action). Authorize a global permission based on ACLs for the given user and the user's groups. |
boolean | TableAuthManager.authorize(User user, String namespace, Permission.Action action) |
boolean | TableAuthManager.authorize(User user, TableName table, byte[] family, byte[] qualifier, Permission.Action action) |
boolean | TableAuthManager.authorize(User user, TableName table, byte[] family, Permission.Action action) |
boolean | TableAuthManager.authorize(User user, TableName table, Cell cell, Permission.Action action). Authorize a user for a given KV. |
boolean | TableAuthManager.authorizeUser(User user, TableName table, byte[] family, byte[] qualifier, Permission.Action action) |
boolean | TableAuthManager.authorizeUser(User user, TableName table, byte[] family, Permission.Action action). Checks authorization to a given table and column family for a user, based on the stored user permissions. |
private void | AccessController.checkForReservedTagPresence(User user, Mutation m) |
private org.apache.hadoop.fs.Path | SecureBulkLoadEndpoint.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, String randomDir) |
private org.apache.hadoop.fs.Path | SecureBulkLoadEndpoint.createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName) |
static AuthResult | AuthResult.deny(String request, String reason, User user, Permission.Action action, String namespace) |
static AuthResult | AuthResult.deny(String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) |
static AuthResult | AuthResult.deny(String request, String reason, User user, Permission.Action action, TableName table, Map<byte[],? extends Collection<?>> families) |
static List<Permission> | AccessControlLists.getCellPermissionsForUser(User user, Cell cell) |
boolean | TableAuthManager.hasAccess(User user, TableName table, Permission.Action action) |
private boolean | AccessController.hasFamilyQualifierPermission(User user, Permission.Action perm, RegionCoprocessorEnvironment env, Map<byte[],? extends Collection<byte[]>> familyMap). Returns true if the current user is allowed the given action over at least one of the column qualifiers in the given column families. |
boolean | TableAuthManager.matchPermission(User user, TableName table, byte[] family, byte[] qualifier, Permission.Action action) |
boolean | TableAuthManager.matchPermission(User user, TableName table, byte[] family, Permission.Action action). Returns true if the given user has a TablePermission matching up to the column family portion of a permission. |
(package private) AuthResult | AccessController.permissionGranted(AccessController.OpType opType, User user, RegionCoprocessorEnvironment e, Map<byte[],? extends Collection<?>> families, Permission.Action... actions). Check the current user for authorization to perform a specific action against the given set of row data. |
(package private) AuthResult | AccessController.permissionGranted(String request, User user, Permission.Action permRequest, RegionCoprocessorEnvironment e, Map<byte[],? extends Collection<?>> families). Check the current user for authorization to perform a specific action against the given set of row data. |
boolean | TableAuthManager.userHasAccess(User user, TableName table, Permission.Action action). Checks if the user has access to the full table or at least a family/qualifier for the specified action. |
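
A sketch of the check-then-audit pattern the AuthResult factories support. Here authManager (a TableAuthManager), tableName, family, qualifier, and LOG are assumed to be in scope, and the "get" request label is arbitrary:

```java
User user = RpcServer.getRequestUser();
AuthResult result;
if (authManager.authorize(user, tableName, family, qualifier, Permission.Action.READ)) {
  result = AuthResult.allow("get", "Table permission granted", user,
      Permission.Action.READ, tableName, family, qualifier);
} else {
  result = AuthResult.deny("get", "Insufficient permissions", user,
      Permission.Action.READ, tableName, family, qualifier);
}
LOG.trace(result.toString()); // AuthResult renders an audit-friendly string
```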
Constructor and Description |
---|
AccessControlFilter(TableAuthManager mgr, User ugi, TableName tableName, AccessControlFilter.Strategy strategy, Map<ByteRange,Integer> cfVsMaxVersions) |
AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, String namespace) |
AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, TableName table, byte[] family, byte[] qualifier) |
AuthResult(boolean allowed, String request, String reason, User user, Permission.Action action, TableName table, Map<byte[],? extends Collection<?>> families) |
Modifier and Type | Method and Description |
---|---|
static void | TokenUtil.addTokenForJob(Connection conn, org.apache.hadoop.mapred.JobConf job, User user). Checks for an authentication token for the given user, obtaining a new token if necessary, and adds it to the credentials for the given MapReduce job. |
static void | TokenUtil.addTokenForJob(Connection conn, User user, org.apache.hadoop.mapreduce.Job job). Checks for an authentication token for the given user, obtaining a new token if necessary, and adds it to the credentials for the given MapReduce job. |
static boolean | TokenUtil.addTokenIfMissing(Connection conn, User user). Checks if an authentication token exists for the connected cluster, obtaining one if needed and adding it to the user's credentials. |
private static org.apache.hadoop.security.token.Token<AuthenticationTokenIdentifier> | TokenUtil.getAuthToken(org.apache.hadoop.conf.Configuration conf, User user). Get the authentication token of the user for the cluster specified in the configuration. |
static void | TokenUtil.obtainAndCacheToken(Connection conn, User user). Obtain an authentication token for the given user and add it to the user's credentials. |
static org.apache.hadoop.security.token.Token<AuthenticationTokenIdentifier> | TokenUtil.obtainToken(Connection conn, User user). Obtain and return an authentication token for the current user. |
static void | TokenUtil.obtainTokenForJob(Connection conn, org.apache.hadoop.mapred.JobConf job, User user). Obtain an authentication token on behalf of the given user and add it to the credentials for the given MapReduce job. |
static void | TokenUtil.obtainTokenForJob(Connection conn, User user, org.apache.hadoop.mapreduce.Job job). Obtain an authentication token on behalf of the given user and add it to the credentials for the given MapReduce job. |
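
A minimal sketch of the typical MapReduce driver usage of obtainTokenForJob; the job name is a placeholder:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.token.TokenUtil;
import org.apache.hadoop.mapreduce.Job;

public class TokenForJobExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hbase-mr"); // job name is a placeholder
    User user = User.getCurrent();
    try (Connection conn = ConnectionFactory.createConnection(conf, user)) {
      // Adds an HBase delegation token to the job credentials so that tasks
      // can authenticate to HBase without a Kerberos ticket of their own.
      TokenUtil.obtainTokenForJob(conn, user, job);
    }
    // job.submit() would follow in a real driver.
  }
}
```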
Modifier and Type | Method and Description |
---|---|
static User | VisibilityUtils.getActiveUser() |
Modifier and Type | Method and Description |
---|---|
List<String> | DefinedSetFilterScanLabelGenerator.getLabels(User user, Authorizations authorizations) |
List<String> | FeedUserAuthScanLabelGenerator.getLabels(User user, Authorizations authorizations) |
List<String> | SimpleScanLabelGenerator.getLabels(User user, Authorizations authorizations) |
List<String> | ScanLabelGenerator.getLabels(User user, Authorizations authorizations). Helps to get a list of labels associated with a UGI. |
List<String> | EnforcingScanLabelGenerator.getLabels(User user, Authorizations authorizations) |
boolean | VisibilityLabelService.havingSystemAuth(User user). System checks for user auth during admin operations. |
boolean | DefaultVisibilityLabelServiceImpl.havingSystemAuth(User user) |
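
A hedged sketch of a custom ScanLabelGenerator implementing the getLabels contract above (the class name is hypothetical; such generators are typically registered via the hbase.regionserver.scan.visibility.label.generator.class property, and a real one would validate the requested labels against what the user has been granted):

```java
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.ScanLabelGenerator;

public class PassThroughScanLabelGenerator implements ScanLabelGenerator {
  private Configuration conf;

  @Override public void setConf(Configuration conf) { this.conf = conf; }
  @Override public Configuration getConf() { return conf; }

  @Override
  public List<String> getLabels(User user, Authorizations authorizations) {
    // Pass through whatever labels the client asked for; a production
    // generator would check them against user.getShortName()'s grants.
    return authorizations == null
        ? Collections.<String>emptyList()
        : authorizations.getLabels();
  }
}
```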
Modifier and Type | Method and Description |
---|---|
static boolean | SnapshotDescriptionUtils.isSnapshotOwner(org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription snapshot, User user). Check if the user is this table snapshot's owner. |
Copyright © 2007–2019 The Apache Software Foundation. All rights reserved.