
CDH file permission problem


When executing a statement, Hive reports insufficient permission on /user:

hive> 
    > select count(*) from fact_sale;
Query ID = root_20201119152619_16f496b5-2482-4efb-a26c-e18117b2f10c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
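
The exception says the HDFS user root was denied WRITE access on /user, which is owned by hdfs:supergroup with mode drwxr-xr-x, so only hdfs (or a member of supergroup) can create anything directly under it. Before changing permissions it can help to confirm this from the shell; a minimal check, assuming the same cluster as above, might be:

# Show ownership and mode of /user itself (-d lists the directory, not its contents)
hadoop fs -ls -d /user

# Show which HDFS groups root resolves to; if supergroup is not among them,
# only the "other" bits (r-x) of /user apply, hence the WRITE denial
hdfs groups root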

Solution:

[root@hp1 ~]# 
[root@hp1 ~]# hadoop fs -ls /
Found 2 items
drwxrwxrwt   - hdfs supergroup          0 2020-11-15 12:22 /tmp
drwxr-xr-x   - hdfs supergroup          0 2020-11-15 12:21 /user
[root@hp1 ~]# 
[root@hp1 ~]# hadoop fs -chmod 777 /user
chmod: changing permissions of '/user': Permission denied. user=root is not the owner of inode=/user
[root@hp1 ~]# 
[root@hp1 ~]# sudo -u hdfs hadoop fs -chmod 777 /user
[root@hp1 ~]# 
[root@hp1 ~]# 
[root@hp1 ~]# hadoop fs -ls /
Found 2 items
drwxrwxrwt   - hdfs supergroup          0 2020-11-15 12:22 /tmp
drwxrwxrwx   - hdfs supergroup          0 2020-11-15 12:21 /user
[root@hp1 ~]# 
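
Note that chmod 777 opens /user to every user on the cluster. A narrower alternative (a sketch, not what was done here) is to create a home directory for root under /user and hand ownership of only that directory to root; on CDH the MapReduce staging directory (yarn.app.mapreduce.am.staging-dir) typically lives under /user/<username>, which is why the job tried to write there in the first place:

# Create /user/root as the HDFS superuser and give it to root
sudo -u hdfs hadoop fs -mkdir -p /user/root
sudo -u hdfs hadoop fs -chown root:root /user/root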

Retest:

hive> 
    > select count(*) from fact_sale;
Query ID = root_20201119153419_cdcbb275-4439-47d6-a240-30641dc180fd
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
20/11/19 15:34:19 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm69
Starting Job = job_1605767427026_0001, Tracking URL = http://hp3:8088/proxy/application_1605767427026_0001/
Kill Command = /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/hadoop/bin/hadoop job  -kill job_1605767427026_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2020-11-19 15:34:58,689 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1605767427026_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 HDFS EC Read: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
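
The AccessControlException from the first run no longer appears, so the /user permission problem itself is resolved; this particular run still ends with return code 2 (the job started with 0 mappers and 0 reducers), which is a separate failure to troubleshoot. One common next step, assuming the cluster above, is to pull the YARN application logs for the failed job:

# Inspect the failed application's container logs for the underlying error
# (application ID taken from the output above)
yarn logs -applicationId application_1605767427026_0001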

Source: https://blog.csdn.net/u010520724/article/details/114582833