Modifier and Type | Method and Description |
---|---|
JVMClusterUtil.MasterThread |
LocalHBaseCluster.addMaster(org.apache.hadoop.conf.Configuration c,
int index,
User user) |
JVMClusterUtil.RegionServerThread |
LocalHBaseCluster.addRegionServer(org.apache.hadoop.conf.Configuration config,
int index,
User user) |
HTableDescriptor |
HTableDescriptor.setOwner(User owner)
Deprecated since 0.94.1. |
Modifier and Type | Field and Description |
---|---|
protected User |
ConnectionImplementation.user |
private User |
AsyncConnectionImpl.user |
Modifier and Type | Method and Description |
---|---|
static CompletableFuture<AsyncConnection> |
ConnectionFactory.createAsyncConnection(org.apache.hadoop.conf.Configuration conf,
User user)
Create a new AsyncConnection instance using the passed conf and user. |
static Connection |
ConnectionFactory.createConnection(org.apache.hadoop.conf.Configuration conf,
ExecutorService pool,
User user)
Create a new Connection instance using the passed
conf instance. |
static Connection |
ConnectionFactory.createConnection(org.apache.hadoop.conf.Configuration conf,
User user)
Create a new Connection instance using the passed
conf instance. |
static ClusterConnection |
ConnectionUtils.createShortCircuitConnection(org.apache.hadoop.conf.Configuration conf,
ExecutorService pool,
User user,
ServerName serverName,
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService.BlockingInterface admin,
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.BlockingInterface client)
Creates a short-circuit connection that can bypass the RPC layer (serialization,
deserialization, networking, etc.) when talking to a local server.
|
TableDescriptorBuilder |
TableDescriptorBuilder.setOwner(User owner)
Deprecated since 2.0.0; will be removed in 3.0.0. |
TableDescriptorBuilder.ModifyableTableDescriptor |
TableDescriptorBuilder.ModifyableTableDescriptor.setOwner(User owner)
Deprecated since 2.0.0; will be removed in 3.0.0. |
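The createConnection and createAsyncConnection overloads above let a caller bind a connection to an explicit User rather than the implicit current user. A minimal sketch, assuming a default HBaseConfiguration and a reachable cluster; the class name is illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.UserProvider;

public class ConnectionAsUser {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // Resolve the current user via UserProvider (see UserProvider.getCurrent() below).
    User user = UserProvider.instantiate(conf).getCurrent();
    // Bind the Connection to that User explicitly instead of the implicit current user.
    try (Connection connection = ConnectionFactory.createConnection(conf, user)) {
      System.out.println("connected as " + user.getName());
    }
  }
}
```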
Constructor and Description |
---|
AsyncConnectionImpl(org.apache.hadoop.conf.Configuration conf,
AsyncRegistry registry,
String clusterId,
User user) |
ConnectionImplementation(org.apache.hadoop.conf.Configuration conf,
ExecutorService pool,
User user)
Constructor.
|
MasterlessConnection(org.apache.hadoop.conf.Configuration conf,
ExecutorService pool,
User user) |
ShortCircuitingClusterConnection(org.apache.hadoop.conf.Configuration conf,
ExecutorService pool,
User user,
ServerName serverName,
org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.AdminService.BlockingInterface admin,
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ClientService.BlockingInterface client) |
Modifier and Type | Field and Description |
---|---|
private User |
ObserverContextImpl.caller |
private User |
Export.PrivilegedWriter.user |
Modifier and Type | Method and Description |
---|---|
private static User |
Export.SecureWriter.getActiveUser(UserProvider userProvider,
org.apache.hadoop.security.token.Token userToken) |
Modifier and Type | Method and Description |
---|---|
Optional<User> |
ObserverContext.getCaller()
Returns the active user for the coprocessor call.
|
Optional<User> |
ObserverContextImpl.getCaller() |
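For coprocessor authors, ObserverContext.getCaller() exposes the User who initiated the call being observed. A minimal sketch of a hypothetical auditing observer; the class name and log line are illustrative, not part of HBase:

```java
import java.io.IOException;
import java.util.Optional;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.MasterObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.security.User;

public class AuditingMasterObserver implements MasterCoprocessor, MasterObserver {
  @Override
  public Optional<MasterObserver> getMasterObserver() {
    return Optional.of(this);
  }

  @Override
  public void preDeleteTable(ObserverContext<MasterCoprocessorEnvironment> ctx,
      TableName tableName) throws IOException {
    // getCaller() is empty when no user is associated with the operation.
    Optional<User> caller = ctx.getCaller();
    System.out.println("deleteTable " + tableName + " requested by "
        + caller.map(User::getShortName).orElse("<unknown>"));
  }
}
```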
Constructor and Description |
---|
ObserverContextImpl(User caller) |
ObserverContextImpl(User caller,
boolean bypassable) |
ObserverOperation(CoprocessorHost.ObserverGetter<C,O> observerGetter,
User user) |
ObserverOperation(CoprocessorHost.ObserverGetter<C,O> observerGetter,
User user,
boolean bypassable) |
ObserverOperationWithoutResult(CoprocessorHost.ObserverGetter<C,O> observerGetter,
User user) |
ObserverOperationWithoutResult(CoprocessorHost.ObserverGetter<C,O> observerGetter,
User user,
boolean bypassable) |
ObserverOperationWithResult(CoprocessorHost.ObserverGetter<C,O> observerGetter,
R result,
User user) |
ObserverOperationWithResult(CoprocessorHost.ObserverGetter<C,O> observerGetter,
R result,
User user,
boolean bypassable) |
PrivilegedWriter(User user,
org.apache.hadoop.io.SequenceFile.Writer out) |
Modifier and Type | Field and Description |
---|---|
(package private) User |
ConnectionId.ticket |
protected User |
AbstractRpcClient.AbstractRpcChannel.ticket |
protected User |
ServerRpcConnection.user |
protected User |
ServerCall.user |
Modifier and Type | Method and Description |
---|---|
User |
ConnectionId.getTicket() |
Modifier and Type | Method and Description |
---|---|
Optional<User> |
RpcCallContext.getRequestUser()
Returns the user credentials associated with the current RPC request or not present if no
credentials were provided.
|
Optional<User> |
ServerCall.getRequestUser() |
static Optional<User> |
RpcServer.getRequestUser()
Returns the user credentials associated with the current RPC request or not present if no
credentials were provided.
|
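Because RpcServer.getRequestUser() is static, server-side code can consult the caller of the in-flight RPC without threading a User through every signature. A small sketch with a hypothetical helper; the Optional is empty when the request carried no credentials:

```java
import java.util.Optional;

import org.apache.hadoop.hbase.ipc.RpcServer;
import org.apache.hadoop.hbase.security.User;

public final class RequestUserExample {
  // Hypothetical helper: resolve the short name of the current RPC's caller,
  // falling back to a default when no credentials were provided.
  public static String callerName(String fallback) {
    Optional<User> requestUser = RpcServer.getRequestUser();
    return requestUser.map(User::getShortName).orElse(fallback);
  }
}
```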
Modifier and Type | Method and Description |
---|---|
private org.apache.hbase.thirdparty.com.google.protobuf.Message |
AbstractRpcClient.callBlockingMethod(org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
HBaseRpcController hrc,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
org.apache.hbase.thirdparty.com.google.protobuf.Message returnType,
User ticket,
InetSocketAddress isa)
Make a blocking call.
|
private void |
AbstractRpcClient.callMethod(org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor md,
HBaseRpcController hrc,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
org.apache.hbase.thirdparty.com.google.protobuf.Message returnType,
User ticket,
InetSocketAddress addr,
org.apache.hbase.thirdparty.com.google.protobuf.RpcCallback<org.apache.hbase.thirdparty.com.google.protobuf.Message> callback) |
org.apache.hbase.thirdparty.com.google.protobuf.BlockingRpcChannel |
RpcClient.createBlockingRpcChannel(ServerName sn,
User user,
int rpcTimeout)
Creates a "channel" that can be used by a blocking protobuf service.
|
org.apache.hbase.thirdparty.com.google.protobuf.BlockingRpcChannel |
AbstractRpcClient.createBlockingRpcChannel(ServerName sn,
User ticket,
int rpcTimeout) |
org.apache.hbase.thirdparty.com.google.protobuf.RpcChannel |
RpcClient.createRpcChannel(ServerName sn,
User user,
int rpcTimeout)
Creates a "channel" that can be used by a protobuf service.
|
org.apache.hbase.thirdparty.com.google.protobuf.RpcChannel |
AbstractRpcClient.createRpcChannel(ServerName sn,
User user,
int rpcTimeout) |
int |
PriorityFunction.getPriority(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader header,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
User user)
Returns the 'priority type' of the specified request.
|
static int |
ConnectionId.hashCode(User ticket,
String serviceName,
InetSocketAddress address) |
Constructor and Description |
---|
AbstractRpcChannel(AbstractRpcClient<?> rpcClient,
InetSocketAddress addr,
User ticket,
int rpcTimeout) |
BlockingRpcChannelImplementation(AbstractRpcClient<?> rpcClient,
InetSocketAddress addr,
User ticket,
int rpcTimeout) |
ConnectionId(User ticket,
String serviceName,
InetSocketAddress address) |
RpcChannelImplementation(AbstractRpcClient<?> rpcClient,
InetSocketAddress addr,
User ticket,
int rpcTimeout) |
Modifier and Type | Method and Description |
---|---|
int |
MasterAnnotationReadingPriorityFunction.getPriority(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader header,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
User user) |
void |
MasterCoprocessorHost.postCompletedCreateTableAction(TableDescriptor htd,
RegionInfo[] regions,
User user) |
void |
MasterCoprocessorHost.postCompletedDeleteTableAction(TableName tableName,
User user) |
void |
MasterCoprocessorHost.postCompletedDisableTableAction(TableName tableName,
User user) |
void |
MasterCoprocessorHost.postCompletedEnableTableAction(TableName tableName,
User user) |
void |
MasterCoprocessorHost.postCompletedMergeRegionsAction(RegionInfo[] regionsToMerge,
RegionInfo mergedRegion,
User user)
Invoked after completing the merge regions operation
|
void |
MasterCoprocessorHost.postCompletedModifyTableAction(TableName tableName,
TableDescriptor oldDescriptor,
TableDescriptor currentDescriptor,
User user) |
void |
MasterCoprocessorHost.postCompletedSplitRegionAction(RegionInfo regionInfoA,
RegionInfo regionInfoB,
User user)
Invoked just after a split
|
void |
MasterCoprocessorHost.postCompletedTruncateTableAction(TableName tableName,
User user) |
void |
MasterCoprocessorHost.postMergeRegionsCommit(RegionInfo[] regionsToMerge,
RegionInfo mergedRegion,
User user)
Invoked after the merge regions operation writes the new region to hbase:meta
|
void |
MasterCoprocessorHost.postRollBackMergeRegionsAction(RegionInfo[] regionsToMerge,
User user)
Invoked after rolling back the merge regions operation
|
void |
MasterCoprocessorHost.postRollBackSplitRegionAction(User user)
Invoked just after the rollback of a failed split
|
void |
MasterCoprocessorHost.preCreateTableAction(TableDescriptor htd,
RegionInfo[] regions,
User user) |
void |
MasterCoprocessorHost.preDeleteTableAction(TableName tableName,
User user) |
void |
MasterCoprocessorHost.preDisableTableAction(TableName tableName,
User user) |
void |
MasterCoprocessorHost.preEnableTableAction(TableName tableName,
User user) |
void |
MasterCoprocessorHost.preMergeRegionsAction(RegionInfo[] regionsToMerge,
User user)
Invoked just before a merge
|
void |
MasterCoprocessorHost.preMergeRegionsCommit(RegionInfo[] regionsToMerge,
List<Mutation> metaEntries,
User user)
Invoked before the merge regions operation writes the new region to hbase:meta
|
void |
MasterCoprocessorHost.preModifyTableAction(TableName tableName,
TableDescriptor currentDescriptor,
TableDescriptor newDescriptor,
User user) |
void |
MasterCoprocessorHost.preSplitAfterMETAAction(User user)
This will be called after the update META step of the split table region procedure.
|
void |
MasterCoprocessorHost.preSplitBeforeMETAAction(byte[] splitKey,
List<Mutation> metaEntries,
User user)
This will be called before the update META step of the split table region procedure.
|
void |
MasterCoprocessorHost.preSplitRegionAction(TableName tableName,
byte[] splitRow,
User user)
Invoked just before a split
|
void |
MasterCoprocessorHost.preTruncateTableAction(TableName tableName,
User user) |
Constructor and Description |
---|
MasterObserverOperation(User user) |
MasterObserverOperation(User user,
boolean bypassable) |
Modifier and Type | Field and Description |
---|---|
private User |
AbstractStateMachineTableProcedure.user |
Modifier and Type | Method and Description |
---|---|
User |
MasterProcedureEnv.getRequestUser() |
protected User |
AbstractStateMachineTableProcedure.getUser() |
static User |
MasterProcedureUtil.toUserInfo(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation userInfoProto) |
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractStateMachineTableProcedure.setUser(User user) |
static org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.UserInformation |
MasterProcedureUtil.toProtoUserInfo(User user) |
Modifier and Type | Method and Description |
---|---|
void |
SnapshotManager.checkPermissions(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription desc,
AccessChecker accessChecker,
User user) |
Modifier and Type | Method and Description |
---|---|
List<org.apache.hadoop.fs.Path> |
DefaultMobStoreCompactor.compact(CompactionRequestImpl request,
ThroughputController throughputController,
User user) |
Modifier and Type | Method and Description |
---|---|
abstract void |
MasterProcedureManager.checkPermissions(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription desc,
AccessChecker accessChecker,
User user)
Check for required permissions before executing the procedure.
|
Modifier and Type | Method and Description |
---|---|
void |
MasterFlushTableProcedureManager.checkPermissions(org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription desc,
AccessChecker accessChecker,
User user) |
Modifier and Type | Method and Description |
---|---|
boolean |
ProcedureExecutor.isProcedureOwner(long procId,
User user)
Checks whether the user is this procedure's owner
|
void |
ProcedureExecutor.setFailureResultForNonce(NonceKey nonceKey,
String procName,
User procOwner,
IOException exception)
If the procedure failed before it was submitted, we may want to return the
same error to later requests carrying the same nonceKey.
|
void |
Procedure.setOwner(User owner) |
Constructor and Description |
---|
FailedProcedure(long procId,
String procName,
User owner,
NonceKey nonceKey,
IOException exception) |
Modifier and Type | Field and Description |
---|---|
private User |
SplitRequest.user |
private User |
CompactSplit.CompactionRunner.user |
Modifier and Type | Method and Description |
---|---|
private User |
SecureBulkLoadManager.getActiveUser() |
Modifier and Type | Method and Description |
---|---|
boolean |
HRegion.compact(CompactionContext compaction,
HStore store,
ThroughputController throughputController,
User user) |
List<HStoreFile> |
HStore.compact(CompactionContext compaction,
ThroughputController throughputController,
User user)
Compact the StoreFiles.
|
List<org.apache.hadoop.fs.Path> |
StripeStoreEngine.StripeCompaction.compact(ThroughputController throughputController,
User user) |
List<org.apache.hadoop.fs.Path> |
DefaultStoreEngine.DefaultCompactionContext.compact(ThroughputController throughputController,
User user) |
List<org.apache.hadoop.fs.Path> |
DateTieredStoreEngine.DateTieredCompactionContext.compact(ThroughputController throughputController,
User user) |
private org.apache.hadoop.fs.Path |
SecureBulkLoadManager.createStagingDir(org.apache.hadoop.fs.Path baseDir,
User user,
String randomDir) |
private org.apache.hadoop.fs.Path |
SecureBulkLoadManager.createStagingDir(org.apache.hadoop.fs.Path baseDir,
User user,
TableName tableName) |
protected List<HStoreFile> |
HStore.doCompaction(CompactionRequestImpl cr,
Collection<HStoreFile> filesToCompact,
User user,
long compactionStartTime,
List<org.apache.hadoop.fs.Path> newFiles) |
private void |
CompactSplit.CompactionRunner.doCompaction(User user) |
int |
RSRpcServices.getPriority(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader header,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
User user) |
int |
AnnotationReadingPriorityFunction.getPriority(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader header,
org.apache.hbase.thirdparty.com.google.protobuf.Message param,
User user)
Returns a 'priority' based on the request type.
|
private List<HStoreFile> |
HStore.moveCompactedFilesIntoPlace(CompactionRequestImpl cr,
List<org.apache.hadoop.fs.Path> newFiles,
User user) |
void |
RegionCoprocessorHost.postCompact(HStore store,
HStoreFile resultFile,
CompactionLifeCycleTracker tracker,
CompactionRequest request,
User user)
Called after the store compaction has completed.
|
void |
RegionCoprocessorHost.postCompactSelection(HStore store,
List<HStoreFile> selected,
CompactionLifeCycleTracker tracker,
CompactionRequest request,
User user)
Called after the HStoreFile instances to be compacted have been selected from the
available candidates. |
void |
RegionCoprocessorHost.preCleanupBulkLoad(User user) |
InternalScanner |
RegionCoprocessorHost.preCompact(HStore store,
InternalScanner scanner,
ScanType scanType,
CompactionLifeCycleTracker tracker,
CompactionRequest request,
User user)
Called prior to rewriting the store files selected for compaction
|
ScanInfo |
RegionCoprocessorHost.preCompactScannerOpen(HStore store,
ScanType scanType,
CompactionLifeCycleTracker tracker,
CompactionRequest request,
User user)
Called prior to opening store scanner for compaction.
|
boolean |
RegionCoprocessorHost.preCompactSelection(HStore store,
List<HStoreFile> candidates,
CompactionLifeCycleTracker tracker,
User user)
Called prior to selecting the HStoreFile instances for compaction from the list of
currently available candidates. |
void |
RegionCoprocessorHost.prePrepareBulkLoad(User user) |
void |
RegionServerCoprocessorHost.preStop(String message,
User user) |
void |
CompactSplit.requestCompaction(HRegion region,
HStore store,
String why,
int priority,
CompactionLifeCycleTracker tracker,
User user) |
void |
CompactSplit.requestCompaction(HRegion region,
String why,
int priority,
CompactionLifeCycleTracker tracker,
User user) |
Optional<CompactionContext> |
HStore.requestCompaction(int priority,
CompactionLifeCycleTracker tracker,
User user) |
private void |
CompactSplit.requestCompactionInternal(HRegion region,
HStore store,
String why,
int priority,
boolean selectNow,
CompactionLifeCycleTracker tracker,
CompactSplit.CompactionCompleteTracker completeTracker,
User user) |
private void |
CompactSplit.requestCompactionInternal(HRegion region,
String why,
int priority,
boolean selectNow,
CompactionLifeCycleTracker tracker,
CompactSplit.CompactionCompleteTracker completeTracker,
User user) |
void |
CompactSplit.requestSplit(Region r,
byte[] midKey,
User user) |
private Optional<CompactionContext> |
CompactSplit.selectCompaction(HRegion region,
HStore store,
int priority,
CompactionLifeCycleTracker tracker,
CompactSplit.CompactionCompleteTracker completeTracker,
User user) |
void |
HRegionServer.stop(String msg,
boolean force,
User user)
Stops the regionserver.
|
Constructor and Description |
---|
BulkLoadObserverOperation(User user) |
CompactionRunner(HStore store,
HRegion region,
CompactionContext compaction,
CompactionLifeCycleTracker tracker,
CompactSplit.CompactionCompleteTracker completeTracker,
ThreadPoolExecutor parent,
User user) |
RegionObserverOperationWithoutResult(User user) |
RegionObserverOperationWithoutResult(User user,
boolean bypassable) |
RegionServerObserverOperation(User user) |
SplitRequest(Region region,
byte[] midKey,
HRegionServer hrs,
User user) |
Modifier and Type | Method and Description |
---|---|
protected List<org.apache.hadoop.fs.Path> |
Compactor.compact(CompactionRequestImpl request,
Compactor.InternalScannerFactory scannerFactory,
Compactor.CellSinkFactory<T> sinkFactory,
ThroughputController throughputController,
User user) |
List<org.apache.hadoop.fs.Path> |
StripeCompactor.compact(CompactionRequestImpl request,
int targetCount,
long targetSize,
byte[] left,
byte[] right,
byte[] majorRangeFromRow,
byte[] majorRangeToRow,
ThroughputController throughputController,
User user) |
List<org.apache.hadoop.fs.Path> |
StripeCompactor.compact(CompactionRequestImpl request,
List<byte[]> targetBoundaries,
byte[] majorRangeFromRow,
byte[] majorRangeToRow,
ThroughputController throughputController,
User user) |
List<org.apache.hadoop.fs.Path> |
DateTieredCompactor.compact(CompactionRequestImpl request,
List<Long> lowerBoundaries,
ThroughputController throughputController,
User user) |
List<org.apache.hadoop.fs.Path> |
DefaultCompactor.compact(CompactionRequestImpl request,
ThroughputController throughputController,
User user)
Do a minor/major compaction on an explicit set of storefiles from a Store.
|
abstract List<org.apache.hadoop.fs.Path> |
CompactionContext.compact(ThroughputController throughputController,
User user) |
abstract List<org.apache.hadoop.fs.Path> |
StripeCompactionPolicy.StripeCompactionRequest.execute(StripeCompactor compactor,
ThroughputController throughputController,
User user)
Executes the request against the compactor (essentially, just calls the correct overload of
the compact method) to simulate more dynamic dispatch.
|
List<org.apache.hadoop.fs.Path> |
StripeCompactionPolicy.BoundaryStripeCompactionRequest.execute(StripeCompactor compactor,
ThroughputController throughputController,
User user) |
List<org.apache.hadoop.fs.Path> |
StripeCompactionPolicy.SplitStripeCompactionRequest.execute(StripeCompactor compactor,
ThroughputController throughputController,
User user) |
private InternalScanner |
Compactor.postCompactScannerOpen(CompactionRequestImpl request,
ScanType scanType,
InternalScanner scanner,
User user)
Calls the coprocessor, if any, to create scanners after normal scanner creation.
|
private ScanInfo |
Compactor.preCompactScannerOpen(CompactionRequestImpl request,
ScanType scanType,
User user) |
void |
CompactionRequester.requestCompaction(HRegion region,
HStore store,
String why,
int priority,
CompactionLifeCycleTracker tracker,
User user)
Request compaction on the given store.
|
void |
CompactionRequester.requestCompaction(HRegion region,
String why,
int priority,
CompactionLifeCycleTracker tracker,
User user)
Request compaction on all the stores of the given region.
|
Modifier and Type | Method and Description |
---|---|
private org.apache.hadoop.fs.Path |
HFileReplicator.createStagingDir(org.apache.hadoop.fs.Path baseDir,
User user,
String randomDir) |
private org.apache.hadoop.fs.Path |
HFileReplicator.createStagingDir(org.apache.hadoop.fs.Path baseDir,
User user,
TableName tableName) |
Modifier and Type | Method and Description |
---|---|
private User |
RSGroupAdminEndpoint.getActiveUser()
Returns the active user to which authorization checks should be applied.
|
Modifier and Type | Class and Description |
---|---|
static class |
User.SecureHadoopUser
Bridges
User invocations to underlying calls to
UserGroupInformation for secure Hadoop
0.20 and versions 0.21 and above. |
Modifier and Type | Field and Description |
---|---|
private static User |
Superusers.systemUser |
Modifier and Type | Method and Description |
---|---|
User |
UserProvider.create(org.apache.hadoop.security.UserGroupInformation ugi)
Wraps an underlying
UserGroupInformation instance. |
static User |
User.create(org.apache.hadoop.security.UserGroupInformation ugi)
Wraps an underlying
UserGroupInformation instance. |
static User |
User.createUserForTesting(org.apache.hadoop.conf.Configuration conf,
String name,
String[] groups)
Generates a new
User instance specifically for use in test code. |
static User |
User.SecureHadoopUser.createUserForTesting(org.apache.hadoop.conf.Configuration conf,
String name,
String[] groups) |
User |
UserProvider.getCurrent() |
static User |
User.getCurrent()
Returns the
User instance within the current execution context. |
static User |
Superusers.getSystemUser() |
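User.getCurrent(), User.create(ugi), and User.createUserForTesting(...) cover the common ways a User instance comes into existence. A minimal sketch; it also uses User.runAs(PrivilegedExceptionAction), which User provides but which is not part of this listing:

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.User;

public class CurrentUserExample {
  public static void main(String[] args) throws Exception {
    // The User for the current execution context.
    User current = User.getCurrent();
    System.out.println("running as " + current.getShortName());

    // For tests only: a synthetic user with explicit group membership.
    Configuration conf = HBaseConfiguration.create();
    User testUser = User.createUserForTesting(conf, "alice", new String[] { "admins" });

    // Execute an action within the test user's security context.
    String name = testUser.runAs(new PrivilegedExceptionAction<String>() {
      @Override
      public String run() throws Exception {
        return User.getCurrent().getShortName();
      }
    });
    System.out.println("action ran as " + name);
  }
}
```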
Modifier and Type | Method and Description |
---|---|
static boolean |
Superusers.isSuperUser(User user) |
Modifier and Type | Field and Description |
---|---|
private User |
AccessControlFilter.user |
private User |
AuthResult.user |
Modifier and Type | Method and Description |
---|---|
User |
AccessController.getActiveUser(ObserverContext<?> ctx)
Returns the active user to which authorization checks should be applied.
|
User |
AuthResult.getUser() |
Modifier and Type | Method and Description |
---|---|
static AuthResult |
AuthResult.allow(String request,
String reason,
User user,
Permission.Action action,
String namespace) |
static AuthResult |
AuthResult.allow(String request,
String reason,
User user,
Permission.Action action,
TableName table,
byte[] family,
byte[] qualifier) |
static AuthResult |
AuthResult.allow(String request,
String reason,
User user,
Permission.Action action,
TableName table,
Map<byte[],? extends Collection<?>> families) |
boolean |
TableAuthManager.authorize(User user,
Permission.Action action)
Authorize a global permission based on ACLs for the given user and the
user's groups.
|
boolean |
TableAuthManager.authorize(User user,
String namespace,
Permission.Action action) |
boolean |
TableAuthManager.authorize(User user,
TableName table,
byte[] family,
byte[] qualifier,
Permission.Action action) |
boolean |
TableAuthManager.authorize(User user,
TableName table,
byte[] family,
Permission.Action action) |
boolean |
TableAuthManager.authorize(User user,
TableName table,
Cell cell,
Permission.Action action)
Authorize a user for a given KV.
|
boolean |
TableAuthManager.authorizeUser(User user,
TableName table,
byte[] family,
byte[] qualifier,
Permission.Action action) |
boolean |
TableAuthManager.authorizeUser(User user,
TableName table,
byte[] family,
Permission.Action action)
Checks authorization to a given table and column family for a user, based on the
stored user permissions.
|
private boolean |
AccessController.checkCoveringPermission(User user,
AccessController.OpType request,
RegionCoprocessorEnvironment e,
byte[] row,
Map<byte[],? extends Collection<?>> familyMap,
long opTs,
Permission.Action... actions)
Determine if cell ACLs covered by the operation grant access.
|
private void |
AccessController.checkForReservedTagPresence(User user,
Mutation m) |
void |
AccessChecker.checkLockPermissions(User user,
String namespace,
TableName tableName,
RegionInfo[] regionInfos,
String reason) |
private void |
AccessController.checkSystemOrSuperUser(User activeUser) |
static AuthResult |
AuthResult.deny(String request,
String reason,
User user,
Permission.Action action,
String namespace) |
static AuthResult |
AuthResult.deny(String request,
String reason,
User user,
Permission.Action action,
TableName table,
byte[] family,
byte[] qualifier) |
static AuthResult |
AuthResult.deny(String request,
String reason,
User user,
Permission.Action action,
TableName table,
Map<byte[],? extends Collection<?>> families) |
static List<Permission> |
AccessControlLists.getCellPermissionsForUser(User user,
Cell cell) |
boolean |
TableAuthManager.hasAccess(User user,
TableName table,
Permission.Action action) |
private boolean |
AccessController.hasFamilyQualifierPermission(User user,
Permission.Action perm,
RegionCoprocessorEnvironment env,
Map<byte[],? extends Collection<byte[]>> familyMap)
Returns
true if the current user is allowed the given action
over at least one of the column qualifiers in the given column families. |
boolean |
TableAuthManager.matchPermission(User user,
TableName table,
byte[] family,
byte[] qualifier,
Permission.Action action) |
boolean |
TableAuthManager.matchPermission(User user,
TableName table,
byte[] family,
Permission.Action action)
Returns true if the given user has a
TablePermission matching up
to the column family portion of a permission. |
private AuthResult |
AccessController.permissionGranted(AccessController.OpType opType,
User user,
RegionCoprocessorEnvironment e,
Map<byte[],? extends Collection<?>> families,
Permission.Action... actions)
Check the current user for authorization to perform a specific action
against the given set of row data.
|
private AuthResult |
AccessController.permissionGranted(String request,
User user,
Permission.Action permRequest,
RegionCoprocessorEnvironment e,
Map<byte[],? extends Collection<?>> families)
Check the current user for authorization to perform a specific action
against the given set of row data.
|
void |
AccessChecker.requireAccess(User user,
String request,
TableName tableName,
Permission.Action... permissions)
Authorizes that the current user has any of the given permissions to access the table.
|
void |
AccessChecker.requireGlobalPermission(User user,
String request,
Permission.Action perm,
String namespace)
Checks that the user has the given global permission.
|
void |
AccessChecker.requireGlobalPermission(User user,
String request,
Permission.Action perm,
TableName tableName,
Map<byte[],? extends Collection<byte[]>> familyMap)
Checks that the user has the given global permission.
|
void |
AccessChecker.requireNamespacePermission(User user,
String request,
String namespace,
Permission.Action... permissions)
Checks that the user has the given global or namespace permission.
|
void |
AccessChecker.requireNamespacePermission(User user,
String request,
String namespace,
TableName tableName,
Map<byte[],? extends Collection<byte[]>> familyMap,
Permission.Action... permissions)
Checks that the user has the given global or namespace permission.
|
void |
AccessChecker.requirePermission(User user,
String request,
Permission.Action perm)
Authorizes that the current user has global privileges for the given action.
|
void |
AccessChecker.requirePermission(User user,
String request,
TableName tableName,
byte[] family,
byte[] qualifier,
Permission.Action... permissions)
Authorizes that the current user has any of the given permissions for the
given table, column family and column qualifier.
|
void |
AccessChecker.requireTablePermission(User user,
String request,
TableName tableName,
byte[] family,
byte[] qualifier,
Permission.Action... permissions)
Authorizes that the current user has any of the given permissions for the
given table, column family and column qualifier.
|
boolean |
TableAuthManager.userHasAccess(User user,
TableName table,
Permission.Action action)
Checks if the user has access to the full table or at least a family/qualifier
for the specified action.
|
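The AccessChecker entries above share one pattern: each require* method throws rather than returns when the User lacks the permission. A sketch of a hypothetical guard method, assuming the call throws AccessDeniedException (an IOException subclass) on denial; note AccessChecker is internal to the master and region server:

```java
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.access.AccessChecker;
import org.apache.hadoop.hbase.security.access.Permission;

public final class ReadGuard {
  // Hypothetical guard: verify the caller may READ a column family before serving it.
  static void checkRead(AccessChecker checker, User user, TableName table,
      byte[] family) throws IOException {
    checker.requirePermission(user, "read", table, family, null,
        Permission.Action.READ);
  }
}
```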
Constructor and Description |
---|
AccessControlFilter(TableAuthManager mgr,
User ugi,
TableName tableName,
AccessControlFilter.Strategy strategy,
Map<ByteRange,Integer> cfVsMaxVersions) |
AuthResult(boolean allowed,
String request,
String reason,
User user,
Permission.Action action,
String namespace) |
AuthResult(boolean allowed,
String request,
String reason,
User user,
Permission.Action action,
TableName table,
byte[] family,
byte[] qualifier) |
AuthResult(boolean allowed,
String request,
String reason,
User user,
Permission.Action action,
TableName table,
Map<byte[],? extends Collection<?>> families) |
Modifier and Type | Method and Description |
---|---|
static void |
TokenUtil.addTokenForJob(Connection conn,
org.apache.hadoop.mapred.JobConf job,
User user)
Checks for an authentication token for the given user, obtaining a new token if necessary,
and adds it to the credentials for the given map reduce job.
|
static void |
TokenUtil.addTokenForJob(Connection conn,
User user,
org.apache.hadoop.mapreduce.Job job)
Checks for an authentication token for the given user, obtaining a new token if necessary,
and adds it to the credentials for the given map reduce job.
|
static boolean |
TokenUtil.addTokenIfMissing(Connection conn,
User user)
Checks if an authentication token exists for the connected cluster,
obtaining one if needed and adding it to the user's credentials.
|
private static org.apache.hadoop.security.token.Token<AuthenticationTokenIdentifier> |
TokenUtil.getAuthToken(org.apache.hadoop.conf.Configuration conf,
User user)
Get the authentication token of the user for the cluster specified in the configuration
|
static void |
TokenUtil.obtainAndCacheToken(Connection conn,
User user)
Obtain an authentication token for the given user and add it to the
user's credentials.
|
static org.apache.hadoop.security.token.Token<AuthenticationTokenIdentifier> |
TokenUtil.obtainToken(Connection conn,
User user)
Obtain and return an authentication token for the current user.
|
static void |
TokenUtil.obtainTokenForJob(Connection conn,
org.apache.hadoop.mapred.JobConf job,
User user)
Obtain an authentication token on behalf of the given user and add it to
the credentials for the given map reduce job.
|
static void |
TokenUtil.obtainTokenForJob(Connection conn,
User user,
org.apache.hadoop.mapreduce.Job job)
Obtain an authentication token on behalf of the given user and add it to
the credentials for the given map reduce job.
|
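The TokenUtil methods above exist mainly to wire delegation tokens into MapReduce jobs so tasks can authenticate to a secure cluster without Kerberos credentials. A minimal sketch, assuming a secured cluster and a default configuration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.token.TokenUtil;
import org.apache.hadoop.mapreduce.Job;

public class TokenForJobExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "hbase-scan");
    User user = User.getCurrent();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // Obtains a token if one is not already present and attaches it to the
      // job's credentials so tasks can authenticate to HBase.
      TokenUtil.addTokenForJob(conn, user, job);
    }
  }
}
```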
Modifier and Type | Method and Description |
---|---|
static User |
VisibilityUtils.getActiveUser() |
Modifier and Type | Method and Description |
---|---|
List<String> |
DefinedSetFilterScanLabelGenerator.getLabels(User user,
Authorizations authorizations) |
List<String> |
ScanLabelGenerator.getLabels(User user,
Authorizations authorizations)
Helps to get a list of labels associated with a UGI
|
List<String> |
SimpleScanLabelGenerator.getLabels(User user,
Authorizations authorizations) |
List<String> |
FeedUserAuthScanLabelGenerator.getLabels(User user,
Authorizations authorizations) |
List<String> |
EnforcingScanLabelGenerator.getLabels(User user,
Authorizations authorizations) |
boolean |
DefaultVisibilityLabelServiceImpl.havingSystemAuth(User user) |
boolean |
VisibilityLabelService.havingSystemAuth(User user)
System checks for user auth during admin operations.
|
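Each ScanLabelGenerator above maps a (User, Authorizations) pair to the visibility labels a scan may use. A hypothetical pass-through implementation, assuming the interface extends org.apache.hadoop.conf.Configurable as the bundled generators do:

```java
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.ScanLabelGenerator;

// Hypothetical generator: pass through whatever labels the scan requested,
// ignoring the user entirely (compare SimpleScanLabelGenerator above).
public class PassThroughScanLabelGenerator implements ScanLabelGenerator {
  private Configuration conf;

  @Override
  public void setConf(Configuration conf) {
    this.conf = conf;
  }

  @Override
  public Configuration getConf() {
    return conf;
  }

  @Override
  public List<String> getLabels(User user, Authorizations authorizations) {
    return authorizations == null ? Collections.<String>emptyList()
        : authorizations.getLabels();
  }
}
```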
Modifier and Type | Method and Description |
---|---|
static boolean |
SnapshotDescriptionUtils.isSnapshotOwner(SnapshotDescription snapshot,
User user)
Checks whether the user is this table snapshot's owner
|