Control panel

==================================================================
Congratulations! Installed successfully!
==================================================================
External panel URL: http://10.8.0.109:8888/b8f1295d
Internal panel URL: http://192.168.0.15:8888/b8f1295d
username: 2qwdqwnf
password: d0328eec
==================================================================
External panel URL: http://10.8.0.125:8888/7701ddeb
Internal panel URL: http://192.168.0.13:8888/7701ddeb
username: zyxzuefn
password: 9d936f5c
==================================================================
External panel URL: http://10.8.0.137:8888/e187bb97
Internal panel URL: http://192.168.0.16:8888/e187bb97
username: yovfijz6
password: 1dbf30a0
==================================================================
External panel URL: http://10.8.0.117:8888/e29d272f
Internal panel URL: http://192.168.0.17:8888/e29d272f
username: rccvc0ra
password: f0eb1ac6

# 10.8.0.125
ssh LTSR003
# 10.8.0.109
ssh LTSR005
# 10.8.0.137
ssh LTSR006
# 10.8.0.117
ssh LTSR007
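
These one-word `ssh LTSR003`-style logins assume each hostname already resolves to the matching IP above. A minimal sketch of how that mapping could be declared, either in /etc/hosts or in ~/.ssh/config (the `root` user is an assumption):

# /etc/hosts
10.8.0.125  LTSR003
10.8.0.109  LTSR005
10.8.0.137  LTSR006
10.8.0.117  LTSR007

# or per-host entries in ~/.ssh/config
Host LTSR003
    HostName 10.8.0.125
    User root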

Check whether MySQL is already installed

ps -el | grep mysql
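
Note that `ps -el | grep mysql` only shows whether a mysql process is currently running. A hedged sketch of checks that also catch an installed-but-stopped MySQL on an RPM-based host like the CentOS machines above (the service name mysqld is an assumption; MariaDB installs use mariadb):

# is the package installed at all?
rpm -qa | grep -i mysql
# is the service running?
systemctl status mysqld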

Hadoop

Standalone version

docker run -i -t -p 50071:50070 -p 9001:9000 -p 8089:8088 -p 8041:8040 -p 8043:8042  -p 49708:49707  -p 50012:50010  -p 50076:50075  -p 50091:50090 sequenceiq/hadoop-docker:2.6.0 /etc/bootstrap.sh -bash
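
Once that container is up, the remapped ports can be spot-checked from the host; a small sketch based only on the port mappings above (50070 -> 50071 is the NameNode web UI, 8088 -> 8089 the ResourceManager UI):

# NameNode web UI, remapped to 50071 on the host
curl -s http://localhost:50071/ | head
# ResourceManager web UI, remapped to 8089
curl -s http://localhost:8089/ | head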

Cluster version

Solr

docker run --name taotao-solr -d -p 8983:8983 -t solr:7.4.0
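
To verify the Solr container, a core can be created and then queried through the admin API; the core name `demo_core` below is just an illustration:

# create a core inside the running container
docker exec -it taotao-solr solr create_core -c demo_core
# query core status from the host
curl 'http://localhost:8983/solr/admin/cores?action=STATUS'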

Hadoop

systemctl restart network
vim /etc/profile
source /etc/profile

#JAVA_HOME
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.262.b10-0.el7_8.x86_64
JRE_HOME=$JAVA_HOME/jre
#HADOOP_HOME

export HADOOP_HOME=/opt/module/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$JRE_HOME/bin

export JAVA_HOME JRE_HOME CLASS_PATH PATH
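
After saving /etc/profile, a quick sanity check that both environments resolve (paths assume the JDK RPM and the Hadoop unpack location shown above):

source /etc/profile
java -version      # should report openjdk 1.8.0_262
hadoop version     # should report Hadoop 2.7.2
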
# Configure the yum repositories


# Install vim
yum -y install vim*

[root@ltsr005 ~]# unzip -q hadoop-2.7.2.zip -d /opt/module/
[root@ltsr005 ~]# cd /opt/module/
[root@ltsr005 module]# ls
hadoop-2.7.2
[root@ltsr005 module]# cd hadoop-2.7.2/
[root@ltsr005 hadoop-2.7.2]# ls
bin  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share
[root@ltsr005 hadoop-2.7.2]# pwd
/opt/module/hadoop-2.7.2

# Official Grep example
[atguigu@hadoop101 hadoop-2.7.2]$ mkdir input

cp etc/hadoop/*.xml input

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep input output 'dfs[a-z.]+'

cat output/*
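
A detail worth remembering with these example jobs: the output directory must not exist beforehand, otherwise the job aborts. To rerun the Grep example, remove it first:

rm -rf output
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar grep input output 'dfs[a-z.]+'
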
# Official WordCount example
[atguigu@hadoop101 hadoop-2.7.2]$ mkdir wcinput

[atguigu@hadoop101 hadoop-2.7.2]$ cd wcinput
[atguigu@hadoop101 wcinput]$ touch wc.input

[atguigu@hadoop101 wcinput]$ vi wc.input
hadoop yarn
hadoop mapreduce
atguigu
atguigu

cd ..

[atguigu@hadoop101 hadoop-2.7.2]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount wcinput wcoutput

cat wcoutput/part-r-00000
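
With the wc.input contents typed above, part-r-00000 should come out roughly as follows (tab-separated, sorted by word):

atguigu	2
hadoop	2
mapreduce	1
yarn	1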

Distributed cluster setup

scp -r /opt/module root@LTSR003:/opt/module

rsync is mainly used for backup and mirroring. It is fast, avoids re-copying identical content, and supports symbolic links.
Difference between rsync and scp: copying with rsync is faster than with scp, because rsync only transfers files that have changed, while scp copies every file regardless.

Option   Function
-r       recursive
-v       show the copy progress
-l       copy symbolic links
rsync -rvl /opt/software/ root@LTSR003:/opt/software

The xsync script below wraps rsync so a file or directory can be pushed to every cluster node with a single command:
#!/bin/bash
# 1. Get the number of arguments; exit immediately if none were given
pcount=$#
if ((pcount==0)); then
 echo no args;
 exit;
fi
# 2. Get the file name
p1=$1
fname=`basename $p1`
echo fname=$fname
# 3. Get the absolute path of the parent directory
pdir=`cd -P $(dirname $p1); pwd`
echo pdir=$pdir
# 4. Get the current user name
user=`whoami`
# 5. Loop over the cluster nodes and push the file to each one
for i in LTSR003 LTSR005 LTSR006 LTSR007
do
 echo ------------------- $i --------------
 if [ "$i" = "${HOSTNAME}" ]; then
    echo "I'm the host ${HOSTNAME}, do nothing."
 else
    rsync -rvl $pdir/$fname $user@$i:$pdir
 fi
done

xsync 123/
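
For `xsync 123/` to work from anywhere, the script has to be executable and on the PATH; a minimal sketch (installing into /usr/local/bin is just one option, any directory already on PATH works):

chmod +x xsync
cp xsync /usr/local/bin/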

HDFS ports

Parameter                  Description                            Default  Config file    Example value
fs.default.name            NameNode RPC port                      8020     core-site.xml  hdfs://master:8020/
dfs.http.address           NameNode web UI port                   50070    hdfs-site.xml  0.0.0.0:50070
dfs.datanode.address       DataNode control (data transfer) port  50010    hdfs-site.xml  0.0.0.0:50010
dfs.datanode.ipc.address   DataNode RPC server address and port   50020    hdfs-site.xml  0.0.0.0:50020
dfs.datanode.http.address  DataNode HTTP server and port          50075    hdfs-site.xml  0.0.0.0:50075
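
A quick way to confirm which of these ports are actually listening on a given node (ss is available on CentOS 7; netstat -tnlp works the same way):

ss -tnlp | grep -E '8020|50070|50010|50020|50075'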

MR ports

Parameter                         Description             Default  Config file      Example value
mapred.job.tracker                JobTracker RPC port     8021     mapred-site.xml  hdfs://master:8021/
mapred.job.tracker.http.address   JobTracker web UI port  50030    mapred-site.xml  0.0.0.0:50030
mapred.task.tracker.http.address  TaskTracker HTTP port   50060    mapred-site.xml  0.0.0.0:50060