
    [root@hadoop101 hadoop]# start-dfs.sh
    Starting namenodes on [hadoop101 hadoop102]
    ERROR: Attempting to operate on hdfs namenode as root
    ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
    Starting datanodes
    ERROR: Attempting to operate on hdfs datanode as root
    ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
    Starting journal nodes [hadoop101 hadoop102 hadoop103]
    ERROR: Attempting to operate on hdfs journalnode as root
    ERROR: but there is no HDFS_JOURNALNODE_USER defined. Aborting operation.
    Starting ZK Failover Controllers on NN hosts [hadoop101 hadoop102]
    ERROR: Attempting to operate on hdfs zkfc as root
    ERROR: but there is no HDFS_ZKFC_USER defined. Aborting operation.

    Go to the sbin folder under the Hadoop installation directory and edit four files there.
    For start-dfs.sh and stop-dfs.sh, add the following parameters:

    #!/usr/bin/env bash
    HDFS_DATANODE_USER=root
    HDFS_JOURNALNODE_USER=root
    HDFS_DATANODE_SECURE_USER=hdfs
    HDFS_NAMENODE_USER=root
    HDFS_ZKFC_USER=root

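The manual edits above can also be scripted. Below is a minimal sketch using GNU sed to insert the variables right after the shebang line; `patch_dfs_script` and the `/tmp` demo path are made-up names for illustration — point it at your real sbin scripts instead.

```shell
#!/usr/bin/env sh
# Hypothetical helper: insert the HDFS_*_USER definitions after the
# shebang (line 1) of a start-dfs.sh / stop-dfs.sh script. GNU sed assumed.
patch_dfs_script() {
  sed -i '1a\
HDFS_DATANODE_USER=root\
HDFS_JOURNALNODE_USER=root\
HDFS_DATANODE_SECURE_USER=hdfs\
HDFS_NAMENODE_USER=root\
HDFS_ZKFC_USER=root' "$1"
}

# Demo on a throwaway copy instead of the real sbin script:
printf '#!/usr/bin/env bash\necho start-dfs placeholder\n' > /tmp/start-dfs-demo.sh
patch_dfs_script /tmp/start-dfs-demo.sh
grep -c '_USER=' /tmp/start-dfs-demo.sh   # 5 inserted lines
```

In a real cluster you would call the helper on both `$HADOOP_HOME/sbin/start-dfs.sh` and `$HADOOP_HOME/sbin/stop-dfs.sh`.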
    For start-yarn.sh and stop-yarn.sh, add the following parameters:

    #!/usr/bin/env bash
    YARN_RESOURCEMANAGER_USER=root
    HADOOP_SECURE_DN_USER=yarn
    YARN_NODEMANAGER_USER=root

    Rerun the start scripts and everything comes up normally.
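An alternative worth knowing: in Hadoop 3.x all of these variables can be exported once in `etc/hadoop/hadoop-env.sh`, which every start/stop script sources, instead of patching four sbin files. A sketch — the `HADOOP_HOME` fallback path here is an assumption for demonstration; use your actual install root:

```shell
#!/usr/bin/env sh
# HADOOP_HOME is an assumption; on a real node it points at the install root.
HADOOP_HOME=${HADOOP_HOME:-/tmp/hadoop-demo}
mkdir -p "$HADOOP_HOME/etc/hadoop"   # demo only; a real install already has this

# Append the per-daemon user exports once, in the file all scripts source.
cat >> "$HADOOP_HOME/etc/hadoop/hadoop-env.sh" <<'EOF'
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
EOF
```

This keeps the sbin scripts untouched, so the change survives if those scripts are ever replaced during an upgrade.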