Access audit logs

Note

This feature requires the Databricks Premium plan.

Databricks provides access to audit logs of activities performed by Databricks users, allowing your enterprise to monitor detailed Databricks usage patterns.

There are two types of logs:

- Workspace-level audit logs with workspace-level events.
- Account-level audit logs with account-level events.

For a list of each type of event and the associated services, see Audit events.

As a Databricks account owner or account admin, you can configure delivery of audit logs as JSON files to a Google Cloud Storage (GCS) bucket, where you can make the data available for usage analysis. Databricks delivers a separate JSON file for each workspace in your account and a separate file for account-level events.

To configure delivery of audit logs, you must first set up a GCS bucket, give Databricks access to that bucket, and then use the account console to define a log delivery configuration that tells Databricks where to send the logs.

You cannot edit a log delivery configuration after it is created, but you can use the account console to disable it temporarily or permanently. You can have a maximum of two audit log delivery configurations enabled at a time.

To configure log delivery, see Configure audit log delivery.
Configure verbose audit logs

In addition to the default events, you can configure a workspace to generate additional events by enabling verbose audit logs.

Additional notebook actions

Additional actions in the notebook audit log category:

Action name runCommand, emitted when Databricks runs a command in a notebook. A command corresponds to a cell in a notebook.

Request parameters:

- notebookId: The notebook ID.
- executionTime: The time the command took to execute, in seconds. This is a decimal value, such as 13.789.
- status: The status of the command. Possible values are finished (the command finished), skipped (the command was skipped), cancelled (the command was cancelled), or failed (the command failed).
- commandId: The unique ID of this command.
- commandText: The text of the command. For multi-line commands, lines are separated by newline characters.
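As an illustration, a verbose runCommand event can be picked out of parsed log records like this. The record below is a fabricated, abbreviated example; real records contain the full schema described on this page.

```python
import json

# Fabricated, abbreviated runCommand record for illustration only.
record = json.loads("""
{
  "serviceName": "notebook",
  "actionName": "runCommand",
  "requestParams": {
    "notebookId": "1234",
    "executionTime": "13.789",
    "status": "failed",
    "commandId": "5678",
    "commandText": "SELECT 1"
  }
}
""")

def is_failed_command(rec: dict) -> bool:
    """True for verbose runCommand events whose command failed."""
    return (
        rec.get("serviceName") == "notebook"
        and rec.get("actionName") == "runCommand"
        and rec.get("requestParams", {}).get("status") == "failed"
    )

print(is_failed_command(record))  # True for the sample above
```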
Additional Databricks SQL actions

Additional actions in the databrickssql audit log category:

Action name commandSubmit, which runs when a command is submitted to Databricks SQL. Request parameters:

- commandText: The user-specified SQL statement or command.
- warehouseId: The ID of the SQL warehouse.
- commandId: The ID of the command.

Action name commandFinish, which runs when a command completes or a command is cancelled. Request parameters:

- warehouseId: The ID of the SQL warehouse.
- commandId: The ID of the command.

Check the response field for additional information related to the command result:

- statusCode: The HTTP response code. This will be error 400 if it is a general error.
- errorMessage: The error message.

Note

In some cases for certain long-running commands, the errorMessage field may not be populated on failure.

- result: This field is empty.
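A minimal sketch of inspecting the response fields of a commandFinish event; the record below is a fabricated example using the fields described above.

```python
# Fabricated commandFinish record for illustration; statusCode and
# errorMessage live under "response", as described above.
record = {
    "serviceName": "databrickssql",
    "actionName": "commandFinish",
    "requestParams": {"warehouseId": "w-1", "commandId": "c-1"},
    "response": {"statusCode": 400, "errorMessage": "syntax error", "result": ""},
}

resp = record.get("response", {})
if resp.get("statusCode") != 200:
    # errorMessage may be absent for some long-running commands.
    print(resp.get("errorMessage", "<no error message>"))
```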
Enable or disable verbose audit logs

1. As an admin, go to the Databricks admin console.
2. Click Workspace settings.
3. Next to Verbose Audit Logs, enable or disable the feature.

When you enable or disable verbose logging, an auditable event is emitted in the workspace category with action workspaceConfKeys. The workspaceConfKeys request parameter is enableVerboseAuditLogs. The workspaceConfValues request parameter is true (feature enabled) or false (feature disabled).
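A minimal sketch of building the request body for flipping this setting programmatically, assuming the workspace configuration REST endpoint (`PATCH /api/2.0/workspace-conf`); host and token below are placeholders, not real values.

```python
import json

def verbose_audit_conf(enabled: bool) -> str:
    """JSON body that flips the enableVerboseAuditLogs workspace setting.
    The API expects string values "true"/"false", matching the
    workspaceConfValues emitted in the audit event."""
    return json.dumps({"enableVerboseAuditLogs": "true" if enabled else "false"})

# Would be sent as, for example (placeholders):
# curl -X PATCH https://<workspace-host>/api/2.0/workspace-conf \
#      -H "Authorization: Bearer <token>" \
#      -d '{"enableVerboseAuditLogs": "true"}'
print(verbose_audit_conf(True))  # {"enableVerboseAuditLogs": "true"}
```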
Latency

Audit log delivery begins within one hour of configuring log delivery, after which you can access the JSON files.

After audit log delivery begins, auditable events are typically logged within one hour. New JSON files may overwrite the existing files for each workspace. Overwriting ensures exactly-once semantics without requiring read or delete access to your account.

Enabling or disabling a log delivery configuration can take up to an hour to take effect.
Location

The delivery location is:

gs://<bucket-name>/<delivery-path-prefix>/workspaceId=<workspaceId>/date=<yyyy-mm-dd>/auditlogs_<internal-id>.json

If the optional delivery path prefix is omitted, the delivery path does not include <delivery-path-prefix>/.

Account-level audit events that are not associated with any single workspace are delivered to the workspaceId=0 partition.
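For example, the GCS prefix for a given workspace and date can be assembled like this. The bucket and prefix names are hypothetical; the `<internal-id>` part of file names is assigned by Databricks, so listing objects under the date prefix is more practical than guessing full file names.

```python
from datetime import date

def audit_log_prefix(bucket: str, prefix: str, workspace_id: int, day: date) -> str:
    """GCS prefix under which that day's audit log files are delivered.
    An empty delivery path prefix is omitted, as described above."""
    parts = [f"gs://{bucket}"]
    if prefix:
        parts.append(prefix)
    parts.append(f"workspaceId={workspace_id}")
    parts.append(f"date={day.isoformat()}")
    return "/".join(parts) + "/"

# Hypothetical bucket and prefix; workspaceId=0 holds account-level events.
print(audit_log_prefix("my-audit-bucket", "audit-logs", 0, date(2022, 1, 15)))
# gs://my-audit-bucket/audit-logs/workspaceId=0/date=2022-01-15/
```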
For details about accessing these files and analyzing them with Databricks, see Audit log analysis.
Schema

Databricks delivers audit logs in JSON format. The schema of audit log records is as follows:

- version: The schema version of the audit log format.
- timestamp: The UTC timestamp of the action.
- workspaceId: The ID of the workspace this event is associated with. May be set to "0" for account-level events that apply to any workspace.
- sourceIPAddress: The IP address of the source request.
- userAgent: The browser or API client used to make the request.
- sessionId: The session ID of the action.
- userIdentity: Information about the user that made the request.
  - email: The user's email address.
- serviceName: The service that logged the request.
- actionName: The action, such as login, logout, read, write, and so on.
- requestId: The unique ID of the request.
- requestParams: Parameter key-value pairs used in the audited event.
- response: The response to the request.
  - errorMessage: The error message if there was an error.
  - result: The result of the request.
  - statusCode: The HTTP status code that indicates whether the request succeeded or not.
- auditLevel: Specifies whether this is a workspace-level event (WORKSPACE_LEVEL) or an account-level event (ACCOUNT_LEVEL).
- accountId: The account ID of this Databricks account.
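A minimal sketch of reading one delivered file record by record, assuming newline-delimited JSON (which is what `spark.read.format("json")` in the analysis section below also expects). The single-record file is fabricated for illustration; real records follow the schema listed above.

```python
import io
import json

# Fabricated single-record file for illustration only.
sample_file = io.StringIO(json.dumps({
    "version": "2.0",
    "timestamp": 1642031898000,
    "workspaceId": "0",
    "serviceName": "accounts",
    "actionName": "login",
    "userIdentity": {"email": "user@example.com"},
    "response": {"statusCode": 200},
    "auditLevel": "ACCOUNT_LEVEL",
}))

def iter_events(fh):
    """Yield (serviceName, actionName, email) per record, assuming one
    JSON record per line."""
    for line in fh:
        rec = json.loads(line)
        yield (rec["serviceName"], rec["actionName"],
               rec.get("userIdentity", {}).get("email"))

for svc, action, email in iter_events(sample_file):
    print(svc, action, email)  # accounts login user@example.com
```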
Audit events

The serviceName and actionName properties identify an audit event in an audit log record. The naming convention follows the Databricks REST API.

Workspace-level audit logs are available for the following services:

- accounts
- clusters
- clusterPolicies
- dbfs
- genie
- gitCredentials
- globalInitScripts
- groups
- iamRole
- instancePools
- jobs
- mlflowExperiment
- notebook
- repos
- secrets
- sqlAnalytics
- sqlPermissions, which holds all the audit logs of table access when table access control lists are enabled.
- ssh
- webTerminal
- workspace

Account-level audit logs are available for the following services:

- accountBillableUsage: Access to billable usage for the account.
- logDelivery: Log delivery configuration.
- accountsManager: Actions performed in the account console.

Account-level events have the workspaceId field set to a valid workspace ID if they reference workspace-related events, such as creating or deleting a workspace. If they are not associated with any workspace, the workspaceId field is set to 0.

Note

If actions take a long time, the request and response are logged separately, but the request and response pair have the same requestId.

Apart from mount-related operations, dbfs-related operations are not included in Databricks audit logs.

Automated actions, such as resizing a cluster due to autoscaling or launching a job due to scheduling, are performed by the user System-User.
Request parameters

The request parameters in the requestParams field are listed in the following sections for each supported service and action, grouped by workspace-level events and account-level events.

The requestParams field is subject to truncation. If the size of its JSON representation exceeds 100 KB, values are truncated and the string ... truncated is appended to truncated entries. In rare cases where a truncated map is still larger than 100 KB, a single TRUNCATED key with an empty value is present instead.
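A sketch of flagging truncated entries in a parsed record; the `... truncated` marker matches the description above, and the sample parameters are fabricated for illustration.

```python
TRUNCATION_MARKER = "... truncated"

def truncated_params(request_params: dict) -> list:
    """Return the requestParams keys whose values were truncated,
    using the marker described above."""
    return [k for k, v in request_params.items()
            if isinstance(v, str) and v.endswith(TRUNCATION_MARKER)]

# Fabricated example parameters.
params = {"commandText": "SELECT * FROM t ... truncated", "notebookId": "1234"}
print(truncated_params(params))  # ['commandText']
```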
Workspace-level audit log events

Service | Action | Request parameters |
---|---|---|
accounts | add | ["targetUserName", "endpoint", "targetUserId"] |
| addPrincipalToGroup | ["targetGroupId", "endpoint", "targetUserId", "targetGroupName", "targetUserName"] |
| changePassword | ["newPasswordSource", "targetUserId", "serviceSource", "wasPasswordChanged", "userId"] |
| createGroup | ["endpoint", "targetGroupId", "targetGroupName"] |
| delete | ["targetUserId", "targetUserName", "endpoint"] |
| garbageCollectDbToken | ["tokenExpirationTime", "tokenId"] |
| generateDbToken | ["tokenId", "tokenExpirationTime"] |
| jwtLogin | ["user"] |
| login | ["user"] |
| logout | ["user"] |
| removeAdmin | ["targetUserName", "endpoint", "targetUserId"] |
| removeGroup | ["targetGroupId", "targetGroupName", "endpoint"] |
| resetPassword | ["serviceSource", "userId", "endpoint", "targetUserId", "targetUserName", "wasPasswordChanged", "newPasswordSource"] |
| revokeDbToken | ["tokenId"] |
| samlLogin | ["user"] |
| setAdmin | ["endpoint", "targetUserName", "targetUserId"] |
| tokenLogin | ["tokenId", "user"] |
| validateEmail | ["endpoint", "targetUserName", "targetUserId"] |
clusters | changeClusterAcl | ["shardName", "aclPermissionSet", "targetUserId", "resourceId"] |
| create | ["cluster_log_conf", "num_workers", "enable_elastic_disk", "driver_node_type_id", "start_cluster", "docker_image", "ssh_public_keys", "aws_attributes", "acl_path_prefix", "node_type_id", "instance_pool_id", "spark_env_vars", "init_scripts", "spark_version", "cluster_source", "autotermination_minutes", "cluster_name", "autoscale", "custom_tags", "cluster_creator", "enable_local_disk_encryption", "idempotency_token", "spark_conf", "organization_id", "no_driver_daemon", "user_id"] |
| createResult | ["clusterName", "clusterState", "clusterId", "clusterWorkers", "clusterOwnerUserId"] |
| delete | ["cluster_id"] |
| deleteResult | ["clusterWorkers", "clusterState", "clusterId", "clusterOwnerUserId", "clusterName"] |
| edit | ["spark_env_vars", "no_driver_daemon", "enable_elastic_disk", "aws_attributes", "driver_node_type_id", "custom_tags", "cluster_name", "spark_conf", "ssh_public_keys", "autotermination_minutes", "cluster_source", "docker_image", "enable_local_disk_encryption", "cluster_id", "spark_version", "autoscale", "cluster_log_conf", "instance_pool_id", "num_workers", "init_scripts", "node_type_id"] |
| permanentDelete | ["cluster_id"] |
| resize | ["cluster_id", "num_workers", "autoscale"] |
| resizeResult | ["clusterWorkers", "clusterState", "clusterId", "clusterOwnerUserId", "clusterName"] |
| restart | ["cluster_id"] |
| restartResult | ["clusterId", "clusterState", "clusterName", "clusterOwnerUserId", "clusterWorkers"] |
| start | ["init_scripts_safe_mode", "cluster_id"] |
| startResult | ["clusterName", "clusterState", "clusterWorkers", "clusterOwnerUserId", "clusterId"] |
clusterPolicies | create | ["name"] |
| edit | ["policy_id", "name"] |
| delete | ["policy_id"] |
| changeClusterPolicyAcl | ["shardName", "targetUserId", "resourceId", "aclPermissionSet"] |
dbfs | addBlock | ["handle", "data_length"] |
| create | ["path", "bufferSize", "overwrite"] |
| delete | ["recursive", "path"] |
| getSessionCredentials | ["mountPoint"] |
| mkdirs | ["path"] |
| mount | ["mountPoint", "owner"] |
| move | ["dst", "source_path", "src", "destination_path"] |
| put | ["path", "overwrite"] |
| unmount | ["mountPoint"] |
databrickssql | addDashboardWidget | ["dashboardId", "widgetId"] |
| cancelQueryExecution | ["queryExecutionId"] |
| changeWarehouseAcls | ["aclPermissionSet", "resourceId", "shardName", "targetUserId"] |
| changePermissions | ["granteeAndPermission", "objectId", "objectType"] |
| cloneDashboard | ["dashboardId"] |
| commandSubmit (verbose audit logs only) | ["orgId", "sourceIpAddress", "timestamp", "userAgent", "userIdentity", "shardName" (see details)] |
| commandFinish (verbose audit logs only) | ["orgId", "sourceIpAddress", "timestamp", "userAgent", "userIdentity", "shardName" (see details)] |
| createAlertDestination | ["alertDestinationId", "alertDestinationType"] |
| createDashboard | ["dashboardId"] |
| createDataPreviewDashboard | ["dashboardId"] |
| createWarehouse | ["auto_resume", "auto_stop_mins", "channel", "cluster_size", "conf_pairs", "custom_cluster_confs", "enable_databricks_compute", "enable_photon", "enable_serverless_compute", "instance_profile_arn", "max_num_clusters", "min_num_clusters", "name", "size", "spot_instance_policy", "tags", "test_overrides"] |
| createQuery | ["queryId"] |
| createQueryDraft | ["queryId"] |
| createQuerySnippet | ["querySnippetId"] |
| createRefreshSchedule | ["alertId", "dashboardId", "refreshScheduleId"] |
| createSampleDashboard | ["sampleDashboardId"] |
| createSubscription | ["dashboardId", "refreshScheduleId", "subscriptionId"] |
| createVisualization | ["queryId", "visualizationId"] |
| deleteAlert | ["alertId"] |
| deleteAlertDestination | ["alertDestinationId"] |
| deleteDashboard | ["dashboardId"] |
| deleteDashboardWidget | ["widgetId"] |
| deleteWarehouse | ["id"] |
| deleteExternalDatasource | ["dataSourceId"] |
| deleteQuery | ["queryId"] |
| deleteQueryDraft | ["queryId"] |
| deleteQuerySnippet | ["querySnippetId"] |
| deleteRefreshSchedule | ["alertId", "dashboardId", "refreshScheduleId"] |
| deleteSubscription | ["subscriptionId"] |
| deleteVisualization | ["visualizationId"] |
| downloadQueryResult | ["fileType", "queryId", "queryResultId"] |
| editWarehouse | ["auto_stop_mins", "channel", "cluster_size", "confs", "enable_photon", "enable_serverless_compute", "id", "instance_profile_arn", "max_num_clusters", "min_num_clusters", "name", "spot_instance_policy", "tags"] |
| executeAdhocQuery | ["dataSourceId"] |
| executeSavedQuery | ["queryId"] |
| executeWidgetQuery | ["widgetId"] |
| favoriteDashboard | ["dashboardId"] |
| favoriteQuery | ["queryId"] |
| forkQuery | ["originalQueryId", "queryId"] |
| listQueries | ["filter_by", "include_metrics", "max_results", "page_token"] |
| moveDashboardToTrash | ["dashboardId"] |
| moveQueryToTrash | ["queryId"] |
| muteAlert | ["alertId"] |
| publishBatch | ["statuses"] |
| publishDashboardSnapshot | ["dashboardId", "hookId", "subscriptionId"] |
| restoreDashboard | ["dashboardId"] |
| restoreQuery | ["queryId"] |
| setWarehouseConfig | ["data_access_config", "enable_serverless_compute", "instance_profile_arn", "security_policy", "serverless_agreement", "sql_configuration_parameters", "try_create_databricks_managed_starter_warehouse"] |
| snapshotDashboard | ["dashboardId"] |
| startWarehouse | ["id"] |
| stopWarehouse | ["id"] |
| subscribeAlert | ["alertId", "destinationId"] |
| transferObjectOwnership | ["newOwner", "objectId", "objectType"] |
| unfavoriteDashboard | ["dashboardId"] |
| unfavoriteQuery | ["queryId"] |
| unmuteAlert | ["alertId"] |
| unsubscribeAlert | ["alertId", "subscriberId"] |
| updateAlert | ["alertId", "queryId"] |
| updateAlertDestination | ["alertDestinationId"] |
| updateDashboard | ["dashboardId"] |
| updateDashboardWidget | ["widgetId"] |
| updateOrganizationSetting | ["has_configured_data_access", "has_explored_sql_warehousing", "has_granted_permissions"] |
| updateQuery | ["queryId"] |
| updateQueryDraft | ["queryId"] |
| updateQuerySnippet | ["querySnippetId"] |
| updateRefreshSchedule | ["alertId", "dashboardId", "refreshScheduleId"] |
| updateVisualization | ["visualizationId"] |
genie | databricksAccess | ["duration", "approver", "reason", "authType", "user"] |
gitCredentials | getGitCredential | ["id"] |
| listGitCredentials | [] |
| deleteGitCredential | ["id"] |
| updateGitCredential | ["id", "git_provider", "git_username"] |
| createGitCredential | ["git_provider", "git_username"] |
globalInitScripts | create | ["name", "position", "script-SHA256", "enabled"] |
| update | ["script_id", "name", "position", "script-SHA256", "enabled"] |
| delete | ["script_id"] |
groups | addPrincipalToGroup | ["user_name", "parent_name"] |
| createGroup | ["group_name"] |
| getGroupMembers | ["group_name"] |
| removeGroup | ["group_name"] |
iamRole | changeIamRoleAcl | ["targetUserId", "shardName", "resourceId", "aclPermissionSet"] |
instancePools | changeInstancePoolAcl | ["shardName", "resourceId", "targetUserId", "aclPermissionSet"] |
| create | ["enable_elastic_disk", "preloaded_spark_versions", "idle_instance_autotermination_minutes", "instance_pool_name", "node_type_id", "custom_tags", "max_capacity", "min_idle_instances", "aws_attributes"] |
| delete | ["instance_pool_id"] |
| edit | ["instance_pool_name", "idle_instance_autotermination_minutes", "min_idle_instances", "preloaded_spark_versions", "max_capacity", "enable_elastic_disk", "node_type_id", "instance_pool_id", "aws_attributes"] |
jobs | cancel | ["run_id"] |
| cancelAllRuns | ["job_id"] |
| changeJobAcl | ["shardName", "aclPermissionSet", "resourceId", "targetUserId"] |
| create | ["spark_jar_task", "email_notifications", "notebook_task", "spark_submit_task", "timeout_seconds", "libraries", "name", "spark_python_task", "job_type", "new_cluster", "existing_cluster_id", "max_retries", "schedule"] |
| delete | ["job_id"] |
| deleteRun | ["run_id"] |
| reset | ["job_id", "new_settings"] |
| resetJobAcl | ["grants", "job_id"] |
| runFailed | ["jobClusterType", "jobTriggerType", "jobId", "jobTaskType", "runId", "jobTerminalState", "idInJob", "orgId"] |
| runNow | ["notebook_params", "job_id", "jar_params", "workflow_context"] |
| runSucceeded | ["idInJob", "jobId", "jobTriggerType", "orgId", "runId", "jobClusterType", "jobTaskType", "jobTerminalState"] |
| submitRun | ["shell_command_task", "run_name", "spark_python_task", "existing_cluster_id", "notebook_task", "timeout_seconds", "libraries", "new_cluster", "spark_jar_task"] |
| update | ["fields_to_remove", "job_id", "new_settings"] |
mlflowExperiment | deleteMlflowExperiment | ["experimentId", "path", "experimentName"] |
| moveMlflowExperiment | ["newPath", "experimentId", "oldPath"] |
| restoreMlflowExperiment | ["experimentId", "path", "experimentName"] |
mlflowModelRegistry | listModelArtifacts | ["name", "version", "path", "page_token"] |
| getModelVersionSignedDownloadUri | ["name", "version", "path"] |
| createRegisteredModel | ["name", "tags"] |
| deleteRegisteredModel | ["name"] |
| renameRegisteredModel | ["name", "new_name"] |
| setRegisteredModelTag | ["name", "key", "value"] |
| deleteRegisteredModelTag | ["name", "key"] |
| createModelVersion | ["name", "source", "run_id", "tags", "run_link"] |
| deleteModelVersion | ["name", "version"] |
| getModelVersionDownloadUri | ["name", "version"] |
| setModelVersionTag | ["name", "version", "key", "value"] |
| deleteModelVersionTag | ["name", "version", "key"] |
| createTransitionRequest | ["name", "version", "stage"] |
| deleteTransitionRequest | ["name", "version", "stage", "creator"] |
| approveTransitionRequest | ["name", "version", "stage", "archive_existing_versions"] |
| rejectTransitionRequest | ["name", "version", "stage"] |
| transitionModelVersionStage | ["name", "version", "stage", "archive_existing_versions"] |
| transitionModelVersionStageDatabricks | ["name", "version", "stage", "archive_existing_versions"] |
| createComment | ["name", "version"] |
| updateComment | ["id"] |
| deleteComment | ["id"] |
notebook | attachNotebook | ["path", "clusterId", "notebookId"] |
| createNotebook | ["notebookId", "path"] |
| deleteFolder | ["path"] |
| deleteNotebook | ["notebookId", "notebookName", "path"] |
| detachNotebook | ["notebookId", "clusterId", "path"] |
| downloadLargeResults | ["notebookId", "notebookFullPath"] |
| downloadPreviewResults | ["notebookId", "notebookFullPath"] |
| importNotebook | ["path"] |
| moveNotebook | ["newPath", "oldPath", "notebookId"] |
| renameNotebook | ["newName", "oldName", "parentPath", "notebookId"] |
| restoreFolder | ["path"] |
| restoreNotebook | ["path", "notebookId", "notebookName"] |
| runCommand (verbose audit logs only) | ["notebookId", "executionTime", "status", "commandId", "commandText" (see details)] |
| takeNotebookSnapshot | ["path"] |
repos | createRepo | ["url", "provider", "path"] |
| updateRepo | ["id", "branch", "tag", "git_url", "git_provider"] |
| getRepo | ["id"] |
| listRepos | ["path_prefix", "next_page_token"] |
| deleteRepo | ["id"] |
| pull | ["id"] |
| commitAndPush | ["id", "message", "files", "checkSensitiveToken"] |
| checkoutBranch | ["id", "branch"] |
| discard | ["id", "file_paths"] |
secrets | createScope | ["scope"] |
| deleteScope | ["scope"] |
| deleteSecret | ["key", "scope"] |
| getSecret | ["scope", "key"] |
| listAcls | ["scope"] |
| listSecrets | ["scope"] |
| putSecret | ["string_value", "scope", "key"] |
sqlanalytics | createEndpoint | |
| startEndpoint | |
| stopEndpoint | |
| deleteEndpoint | |
| editEndpoint | |
| changeEndpointAcls | |
| setEndpointConfig | |
| createQuery | ["queryId"] |
| updateQuery | ["queryId"] |
| forkQuery | ["queryId", "originalQueryId"] |
| moveQueryToTrash | ["queryId"] |
| deleteQuery | ["queryId"] |
| restoreQuery | ["queryId"] |
| createDashboard | ["dashboardId"] |
| updateDashboard | ["dashboardId"] |
| moveDashboardToTrash | ["dashboardId"] |
| deleteDashboard | ["dashboardId"] |
| restoreDashboard | ["dashboardId"] |
| createAlert | ["alertId", "queryId"] |
| updateAlert | ["alertId", "queryId"] |
| deleteAlert | ["alertId"] |
| createVisualization | ["visualizationId", "queryId"] |
| updateVisualization | ["visualizationId"] |
| deleteVisualization | ["visualizationId"] |
| changePermissions | ["objectType", "objectId", "granteeAndPermission"] |
| createAlertDestination | ["alertDestinationId", "alertDestinationType"] |
| updateAlertDestination | ["alertDestinationId"] |
| deleteAlertDestination | ["alertDestinationId"] |
| createQuerySnippet | ["querySnippetId"] |
| updateQuerySnippet | ["querySnippetId"] |
| deleteQuerySnippet | ["querySnippetId"] |
| downloadQueryResult | ["queryId", "queryResultId", "fileType"] |
sqlPermissions | createSecurable | ["securable"] |
| grantPermission | ["permission"] |
| removeAllPermissions | ["securable"] |
| requestPermissions | ["requests"] |
| revokePermission | ["permission"] |
| showPermissions | ["securable", "principal"] |
ssh | login | ["containerId", "userName", "port", "publicKey", "instanceId"] |
| logout | ["userName", "containerId", "instanceId"] |
webTerminal | startSession | ["socketGUID", "clusterId", "serverPort", "ProxyTargetURI"] |
| closeSession | ["socketGUID", "clusterId", "serverPort", "ProxyTargetURI"] |
workspace | changeWorkspaceAcl | ["shardName", "targetUserId", "aclPermissionSet", "resourceId"] |
| fileCreate | ["path"] |
| fileDelete | ["path"] |
| moveWorkspaceNode | ["destinationPath", "path"] |
| purgeWorkspaceNodes | ["treestoreId"] |
| workspaceConfEdit | ["workspaceConfKeys (values: enableResultsDownloading, enableExportNotebook)", "workspaceConfValues"] |
| workspaceExport | ["workspaceExportFormat", "notebookFullPath"] |
Account-level audit log events

Service | Action | Request parameters |
---|---|---|
accountBillableUsage | getAggregatedUsage | ["account_id", "window_size", "start_time", "end_time", "meter_name", "workspace_ids_filter"] |
| getDetailedUsage | ["account_id", "start_month", "end_month", "with_pii"] |
accounts | login | ["user"] |
| gcpWorkspaceBrowserLogin | ["user"] |
| logout | ["user"] |
accountsManager | updateAccount | ["account_id", "account"] |
| changeAccountOwner | ["account_id", "first_name", "last_name", "email"] |
| updateSubscription | ["account_id", "subscription_id", "subscription"] |
| listSubscriptions | ["account_id"] |
| createWorkspaceConfiguration | ["workspace"] |
| getWorkspaceConfiguration | ["account_id", "workspace_id"] |
| listWorkspaceConfigurations | ["account_id"] |
| updateWorkspaceConfiguration | ["account_id", "workspace_id"] |
| deleteWorkspaceConfiguration | ["account_id", "workspace_id"] |
| listWorkspaceEncryptionKeyRecords | ["account_id", "workspace_id"] |
| listWorkspaceEncryptionKeyRecordsForAccount | ["account_id"] |
| createVpcEndpoint | ["vpc_endpoint"] |
| getVpcEndpoint | ["account_id", "vpc_endpoint_id"] |
| listVpcEndpoints | ["account_id"] |
| deleteVpcEndpoint | ["account_id", "vpc_endpoint_id"] |
| createPrivateAccessSettings | ["private_access_settings"] |
| getPrivateAccessSettings | ["account_id", "private_access_settings_id"] |
| listPrivateAccessSettingss | ["account_id"] |
| deletePrivateAccessSettings | ["account_id", "private_access_settings_id"] |
logDelivery | createLogDeliveryConfiguration | ["account_id", "config_id"] |
| updateLogDeliveryConfiguration | ["config_id", "account_id", "status"] |
| getLogDeliveryConfiguration | ["log_delivery_configuration"] |
| listLogDeliveryConfigurations | ["account_id", "storage_configuration_id", "credentials_id", "status"] |
ssoConfigBackend | create | ["account_id", "sso_type", "config"] |
| update | ["account_id", "sso_type", "config"] |
| get | ["account_id", "sso_type"] |
Audit log analysis

You can analyze audit logs with Databricks. The following examples use the logs to report on Databricks access and Apache Spark versions.

Load the audit logs as a DataFrame and register the DataFrame as a temp table.

```scala
val df = spark.read.format("json").load("gs://bucketName/path/to/your/audit-logs")
df.createOrReplaceTempView("audit_logs")
```

List the users who accessed Databricks and from where.

```sql
%sql
SELECT DISTINCT userIdentity.email, sourceIPAddress
FROM audit_logs
WHERE serviceName = "accounts" AND actionName LIKE "%login%"
```

Check the Apache Spark versions used.

```sql
%sql
SELECT requestParams.spark_version, count(*)
FROM audit_logs
WHERE serviceName = "clusters" AND actionName = "create"
GROUP BY requestParams.spark_version
```

Check table data access.

```sql
%sql
SELECT *
FROM audit_logs
WHERE serviceName = "sqlPermissions" AND actionName = "requestPermissions"
```