## Container Time Zone Setup
### Check the time zone

```bash
date -R
```

### Replace the system time zone file with the matching zoneinfo file

#### If the container has no /usr/share/zoneinfo/Asia/Shanghai, copy it from the host

```bash
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
docker cp /path 'container-id':/path/filename
```
## 1. Installing Docker
- Remove old versions
```bash
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine
```
- Configure the package repository
```bash
sudo yum install -y yum-utils
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo          # default repo
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo  # Aliyun mirror
```
- Install the Docker packages
```bash
yum makecache fast                                                          # refresh the package cache
sudo yum install -y docker-ce docker-ce-cli containerd.io                   # install the latest version
sudo yum install -y docker-ce-1.13.1 docker-ce-cli-1.13.1 containerd.io     # or pin a specific version
```
- Start Docker
```bash
sudo systemctl start docker

# Move the Docker data directory to /data/docker (/data is a separate partition)
mv /var/lib/docker /data/docker
# Create a symlink back to the original location
ln -s /data/docker /var/lib/docker
```
- Aliyun registry mirror (pull acceleration)
```bash
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://nzxppyu8.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

### Enable start on boot
systemctl enable docker
# or disable it
systemctl disable docker.service
```
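To confirm the mirror is picked up after the restart, `docker info` lists the configured mirrors (the grep is just a convenience):

```bash
docker info | grep -A 3 "Registry Mirrors"   # should list https://nzxppyu8.mirror.aliyuncs.com
```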
- Run the hello-world test image
```bash
sudo docker run hello-world
```
- Install Docker Compose: download a release binary from https://github.com/docker/compose/releases (install sketch below)
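A minimal install sketch, assuming a Linux host and pinning 1.29.2 purely as an example (check the releases page for the current version):

```bash
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version    # verify the install
```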
## 2. Basic Docker Commands

```bash
docker pull 'name'                      # pull an image
docker rmi -f 'REPOSITORY'              # remove an image (-f: force)
docker rmi -f $(docker images -q)       # remove all images
docker ps -a -q | xargs docker rm       # remove all containers
# Remove images whose tag shows <none>
sudo docker images | grep none | awk '{print $3}' | xargs sudo docker rmi

sudo docker rm -f ee0c12940191          # remove a specific container

docker start 'id'                       # start a stopped container
docker restart 'id'                     # restart a container
docker stop 'id'                        # stop a container
docker kill 'id'                        # force-stop a container

# Logs: -t show timestamps, -f follow, --tail number of lines to show
docker logs -f -t --tail number 'id'
# Networks
docker network create custom_net
docker network ls

# Inspect a container
docker inspect centos
# Show an image's build history
docker history 'IMAGE ID'
# Show CPU, memory, etc. usage of running containers
docker stats

# Enter a container
docker run -it centos /bin/bash         # -it: start a centos container interactively
docker run -d 'id'                      # -d: run in the background
exit                                    # exit and stop the container
ctrl + P + Q                            # detach and keep the container running
docker ps -a                            # list containers (-a: include stopped ones)
# Attach to a running container
[root@iZwz9actuhd1532kyp2llsZ ~]# docker ps      # check container status
CONTAINER ID   IMAGE     COMMAND       CREATED          STATUS          PORTS   NAMES
a5a7b998aee3   centos    "/bin/bash"   14 minutes ago   Up 14 minutes           elated_dhawan
[root@iZwz9actuhd1532kyp2llsZ ~]# docker exec -it a5a7b998aee3 /bin/bash
# or: docker attach a5a7b998aee3

# File copy: copy a file out of a container
docker cp 'container-id':/path/filename /path
## Sync the time zone
docker cp /usr/share/zoneinfo/Asia/Shanghai 'container-id':/etc/localtime
docker restart 'container-id'
## Package a container as an image
docker commit container-id image-name:tag
docker save image-name:tag -o xxx.tar       # or: docker export container-id > xxx.tar
docker load < xxx.tar                       # or: docker import xxx.tar image-name:tag
```

Notes on the differences:

- Image import vs container import: importing an image (`docker load`) is a copy, so the imported image keeps the same image ID as the exported one; importing a container (`docker import`) turns the current container into a brand-new image.
- `save` vs `export`: `save` keeps all image information, including its history, so the archive is larger; `export` only keeps the current filesystem, dropping all history and metadata, so it is smaller.
- `load` vs `import`: `load` cannot rename the imported image; `import` can rename it and can also apply container-modifying Dockerfile instructions during import.
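To see those differences in practice, the sketch below runs both paths side by side; the image `demo:1.0` and container `demo-c` are hypothetical names:

```bash
docker save demo:1.0 -o demo_save.tar       # keeps all layers, history and metadata (larger)
docker export demo-c > demo_export.tar      # flattened filesystem only (smaller)

docker load -i demo_save.tar                # restores demo:1.0 with its original image ID
docker import demo_export.tar demo:flat     # creates a brand-new image, renamed to demo:flat
```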
## 3. Examples
### 3.1 Portainer (web UI for Docker)
```bash
docker pull portainer/portainer
# Standalone
docker run -d -p 9001:9000 \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --name portainer-test1 \
  portainer/portainer
# --------- visit http://ip:9001/ (host port mapped above) ---------
# Cluster
docker run -d -p 9000:9000 --restart=always --name portainer-test portainer/portainer
# --------- visit http://ip:9000/ ---------
```
### 3.2 RabbitMQ
```bash
# Pull a tag that bundles the web management console
docker pull rabbitmq:management
# Run
# Option 1: default user and password are both guest
docker run -d --hostname my_RabbitMQ --name RabbitMQ -p 15672:15672 -p 5672:5672 rabbitmq:management
# Option 2: set the username and password explicitly
docker run -d --hostname my_RabbitMQ --name RabbitMQ -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin -p 15672:15672 -p 5672:5672 rabbitmq:management
docker run -d --name RabbitMQ_K -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin123 -p 15673:15672 -p 5673:5672 rabbitmq:management
# ------------------ visit http://ip:15672/ ----------------

######## STOMP over WebSocket (for JavaScript clients): also publish 15674
docker run -d --hostname my_RabbitMQ --name RabbitMQ -e RABBITMQ_DEFAULT_USER=admin -e RABBITMQ_DEFAULT_PASS=admin -p 15672:15672 -p 5672:5672 -p 15674:15674 rabbitmq:management
## Enable the plugins (inside the container)
rabbitmq-plugins enable rabbitmq_web_stomp rabbitmq_web_stomp_examples
```
### 3.3 Tomcat
```bash
# Pull
docker pull tomcat
# Run in the background
docker run -d --name tomcat_test1 -p 8089:8080 tomcat
# Enter the container
docker exec -it 'id' /bin/bash
# Newer tomcat images ship an empty webapps; copy the default apps back in
cp -r webapps.dist/* webapps
mkdir -p /home/tomcat/webapps /home/tomcat/logs
# ------------------ visit http://ip:8089/ ----------------
# Run with webapps and logs mounted from the host
docker run -d -p 8888:8080 --name mytomcat -v /home/zlh/mydata/web/webapps:/usr/local/tomcat/webapps/ -v /home/zlh/mydata/web/logs:/usr/local/tomcat/logs tomcat
docker run -d -p 8888:8080 --name mytomcat -v /home/zlh/tomcat/webapps:/usr/local/tomcat/webapps/ -v /home/zlh/tomcat/logs:/usr/local/tomcat/logs tomcat
```
### 3.4 Elasticsearch 7.11.1
```bash
docker pull elasticsearch:7.11.1
# Keep data in dedicated host directories and mount them
mkdir -p /home/elasticsearch/config /home/elasticsearch/data /home/elasticsearch/plugins
touch /home/elasticsearch/config/elasticsearch.yml
chmod -R 777 /home/elasticsearch
# Start
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 --restart=always \
  -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  -e "discovery.type=single-node" \
  -v /home/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /home/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /home/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  --network my_net --network-alias es elasticsearch:7.11.1
# ES_JAVA_OPTS="-Xms64m -Xmx512m" controls the JVM heap size
# Check the logs to confirm it started
docker logs -f container-id

###### Check and raise vm.max_map_count
cat /proc/sys/vm/max_map_count          # current value
sysctl -w vm.max_map_count=262144       # set it

########## If connections are refused ##########
# Add the following to elasticsearch.yml to accept connections from other IPs:
# network.host: 0.0.0.0

# Install Kibana
docker pull kibana:7.10.1
mkdir -p /home/kibana/config/
vi /home/kibana/config/kibana.yml
####### add the following to kibana.yml #######
# Default Kibana configuration for docker target
# server.name: kibana
# server.host: "0"
# elasticsearch.hosts: [ "http://ip:9200" ]
# xpack.monitoring.ui.container.elasticsearch.enabled: true
###############################################
# Start
docker run -d --restart=always --name kibana -p 5601:5601 -v /home/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml -e "I18N_LOCALE=zh-CN" kibana:7.10.1
# Check the logs to confirm it started
docker logs -f container-id

# Install the ik analyzer
# Download: https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.10.1/elasticsearch-analysis-ik-7.10.1.zip
# Create an ik folder under /home/elasticsearch/plugins, unzip the archive into it, then restart ES
```
#### 3.4.1 Setting up authentication
```bash
# 1. Add the following to elasticsearch.yml:
#    xpack.security.enabled: true
#    xpack.license.self_generated.type: basic
#    xpack.security.transport.ssl.enabled: true
## If running in Docker, restart the container after adding these

# 2. Set the passwords from the elasticsearch/bin directory:
./elasticsearch-setup-passwords interactive

# 2.1 Change the password of a specific account --- change the elastic password to 123456
curl -H "Content-Type:application/json" -XPOST -u elastic 'http://localhost:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "123456" }'
## You will be prompted for the old password, after which the new one takes effect

# 3. Update the Kibana config (kibana.yml):
#    elasticsearch.username: "kibana_system"
#    elasticsearch.password: "953598751"

# 4. Update the Logstash config (logstash.yml):
#    xpack.monitoring.enabled: true
#    xpack.monitoring.elasticsearch.username: logstash_system
#    xpack.monitoring.elasticsearch.password: 953598751
#    xpack.monitoring.elasticsearch.hosts: ["http://ip:9200"]
##### and in the xxxxx.conf pipeline output:
#    elasticsearch {
#      hosts => ["localhost:9200"]
#      index => "xxxxx"
#      document_id => "%{id}"
#      user => "elastic"
#      password => "953598751"
#    }
```
#### 3.4.2 Elastic_Dump
```bash
docker pull taskrabbit/elasticsearch-dump
# --------- export (to files) ---------
docker run --rm -ti -v /home/java/es_dump:/tmp taskrabbit/elasticsearch-dump --input=http://121.196.158.48:9200/index_polylines --output=/tmp/index_polylines_mapping.json --type=mapping
docker run --rm -ti -v /home/java/es_dump:/tmp taskrabbit/elasticsearch-dump --input=http://121.196.158.48:9200/index_polylines --output=/tmp/index_polylines_data.json --type=data
# --------- import (from files) ---------
docker run --rm -ti -v /home/java/es_dump:/tmp --network custom_net --network-alias esdump taskrabbit/elasticsearch-dump --input=/tmp/index_polylines_mapping.json --output=http://elasticsearch:9200/index_polylines --type=mapping
docker run --rm -ti -v /home/java/es_dump:/tmp --network custom_net --network-alias esdump taskrabbit/elasticsearch-dump --input=/tmp/index_polylines_data.json --output=http://elasticsearch:9200/index_polylines --type=data
# --------- copy directly between clusters (by IP) ---------
docker run --rm -ti -v /home/java/es_dump:/tmp --network custom_net --network-alias esdump taskrabbit/elasticsearch-dump --input=http://121.196.158.48:9200/index_add_bridge --output=http://elasticsearch:9200/index_add_bridge --type=mapping
docker run --rm -ti -v /home/java/es_dump:/tmp --network custom_net --network-alias esdump taskrabbit/elasticsearch-dump --input=http://121.196.158.48:9200/index_add_bridge --output=http://elasticsearch:9200/index_add_bridge --type=data
```
### 3.5 Nginx
```bash
docker pull nginx                                    # pull the image
mkdir -p /home/nginx/conf /home/nginx/conf.d         # create the config directories
# Copy the default config files out of a temporary container
docker run --name nginx01 -d nginx:latest
docker cp nginx01:/etc/nginx/nginx.conf /home/nginx/conf               # copy nginx.conf into conf
docker cp nginx01:/etc/nginx/conf.d/default.conf /home/nginx/conf.d    # copy default.conf into conf.d
docker rm -f nginx01                                 # remove the temporary container
# Create the container
docker run -it -d --name mynginx -p 888:80 \
  -v /home/nginx/html:/usr/share/nginx/html \
  -v /home/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
  -v /home/nginx/conf.d:/etc/nginx/conf.d \
  -v /home/nginx/logs:/var/log/nginx nginx
```
- Load-balance across multiple Tomcats
```nginx
# default.conf
upstream blance {
    #ip_hash;   # pin each client to one backend server; one way to handle session stickiness
    server 172.18.0.8:8080 weight=1;
    server 172.18.0.9:8080 weight=1;
    #fair;      # dispatch by backend response time; faster backends get requests first
}
location / {
    root /usr/share/nginx/html;
    proxy_pass http://blance;
    index index.html index.htm;
}
```
- Route to a specific Tomcat by URL
```nginx
# Requests to http://ip:port/cug are forwarded to the Tomcat below
location ~ /cug {
    proxy_pass http://172.18.0.9:8080;   # the target Tomcat's webapps must contain a cug folder
}
```
- Static resources
```nginx
# Static files live in the pic folder under the root path below
location /pic/ {
    autoindex on;               # list the directory contents
    autoindex_exact_size off;   # show approximate sizes in kB/MB/GB; the default (on) shows exact sizes in bytes
    autoindex_localtime on;     # show file times in server local time; the default (off) shows GMT
    root /usr/share/nginx/html;
}
```
Start: `./nginx`
Stop: `./nginx -s stop`
Reload the configuration (also used as a soft restart): `./nginx -s reload`
```bash
systemctl start nginx.service     # start nginx
systemctl stop nginx.service      # stop nginx
systemctl restart nginx.service   # restart nginx
```
### 3.6 SuperMap iServer
```bash
docker pull supermap/iserver:10.1.2a
docker run -d --name supermap_iserver -p 8090:8090 -v /home/supermap/my_opts:/opt/iserverOPTs -v /home/supermap/Desktop:/etc/icloud/supermap-iserver-10.1.2a-linux64-deploy/Desktop supermap/iserver:10.1.2a
# ----------------------------------------------------------------
# The Desktop path inside the image differs between versions, e.g.
#   /etc/icloud/supermap-iserver-10.1.2a-linux64-deploy/Desktop
#   /etc/icloud/supermap_iserver_1001_18915_4646_linux64_deploy/desktop
# Locate it with: find / -name 'Desktop'
docker run -d --name supermap_iserver -p 8090:8090 -v /home/supermap/my_opts:/opt/iserverOPTs supermap/iserver:10.0.1
docker run -d -p 8090:8090 --name supermap_iserver -v /home/supermap/my_opts:/opt/iserverOPTs -v /home/supermap/Desktop:/etc/icloud/supermap_iserver_1001_18915_4646_linux64_deploy/desktop supermap/iserver:10.0.1
```
### 3.7 PHP environment
> Dockerfile
```dockerfile
FROM php:7.4-cli
ADD index.php /var/www/
EXPOSE 8080
WORKDIR /var/www/
ENTRYPOINT ["php", "-S", "0.0.0.0:8080"]
```
> index.php
```php
<?php echo phpversion(); ?>
```
> Run
```bash
docker build -t phpproject:0.1 .
docker run -d -p 8081:8080 -v /home/php/:/var/www/ --network my_dq_net --network-alias phpproject --name myphp phpproject:0.1
```
> Test
```bash
curl localhost:8081
```
### 3.8 lanproxy
```bash
### server
docker run -d \
  --name lanproxy-server \
  -p 10000:8090 \
  -p 4900:4900 \
  -p 4993:4993 \
  -p 10001-10020:10001-10020 \
  --restart=always \
  -e LANPROXY_USERNAME="admin" \
  -e LANPROXY_PASSWORD="admin123" \
  franklin5/lanproxy-server
### client
docker run -d \
  --name lanproxy-client \
  -e LANPROXY_KEY="d2733bd0e661477587d175282b909b9f" \
  -e LANPROXY_HOST="47.97.214.127" \
  --restart=always \
  franklin5/lanproxy-client
# 172.26.20.151
```
### 3.9 Redis
```bash
docker pull redis:6.0
mkdir -p /home/redis/conf
touch /home/redis/conf/redis.conf
docker run -p 6379:6379 --name redis6.0 -v /home/redis/data:/data -v /home/redis/conf/redis.conf:/etc/redis/redis.conf -d --network my_net --network-alias redis6.0 --restart=always redis:6.0 redis-server /etc/redis/redis.conf
docker run -p 6382:6379 --name redis2 -v /home/redis2/data:/data -v /home/redis2/conf/redis.conf:/etc/redis/redis.conf -d redis redis-server /etc/redis/redis.conf
docker run -p 6379:6379 --name redis6.0 -v /home/redis/data:/data --sysctl net.core.somaxconn=1024 -v /home/redis/conf/redis.conf:/etc/redis/redis.conf -d redis redis-server /etc/redis/redis.conf

## Warnings in the log and how to fix them
# "overcommit_memory is set to 0! Background save may fail under low memory condition.
#  To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot
#  or run the command 'sysctl vm.overcommit_memory=1' for this to take effect."
# On the Linux host:
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl vm.overcommit_memory=1

# "The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn
#  is set to the lower value of 128."
# On the Linux host:
echo 511 > /proc/sys/net/core/somaxconn
```
### 3.10 Nacos
```bash
# 1. Pull the image
docker pull nacos/nacos-server:1.4.2
# 2. Run nacos-mysql.sql in MySQL
# 3. Start
docker run -d -p 8848:8848 \
  --name nacos \
  --network my_net \
  --network-alias nacos1.4.2 \
  --env MODE=standalone \
  --env SPRING_DATASOURCE_PLATFORM=mysql \
  --env MYSQL_SERVICE_HOST=mysql5.7 \
  --env MYSQL_SERVICE_PORT=3306 \
  --env MYSQL_SERVICE_DB_NAME=nacos_config \
  --env MYSQL_SERVICE_USER=root \
  --env MYSQL_SERVICE_PASSWORD=953598751 \
  nacos/nacos-server:1.4.2
# 4. Copy the nacos directory out of the container
docker cp -a nacos:/home/nacos /home
# 5. Remove the nacos container
docker rm -f nacos
# 6. Start nacos again with the directories mounted
docker run -d -p 8848:8848 \
  --name nacos \
  --restart=always \
  --network my_net \
  --network-alias nacos1.4.2 \
  --env MODE=standalone \
  --env SPRING_DATASOURCE_PLATFORM=mysql \
  --env MYSQL_SERVICE_HOST=mysql5.7 \
  --env MYSQL_SERVICE_PORT=3306 \
  --env MYSQL_SERVICE_DB_NAME=nacos_config \
  --env MYSQL_SERVICE_USER=root \
  --env MYSQL_SERVICE_PASSWORD=953598751 \
  -v /home/nacos/conf:/home/nacos/conf \
  -v /home/nacos/logs:/home/nacos/logs \
  -v /home/nacos/data:/home/nacos/data \
  nacos/nacos-server:1.4.2
```
### 3.11 Sentinel
```bash
docker pull bladex/sentinel-dashboard:latest
docker run --name sentinel -d -p 8858:8858 --restart=always --network my_net bladex/sentinel-dashboard
```
### 3.12 HBase
```bash
docker pull harisekhon/hbase
docker run -d -h hbase192 \
  -p 2181:2181 \
  -p 9090:9090 \
  -p 9095:9095 \
  -p 16000:16000 \
  -p 16010:16010 \
  -p 16020:16020 \
  -p 16201:16201 \
  -p 16301:16301 \
  --name hbase \
  --restart=always \
  --network my_net --network-alias hbase192 \
  harisekhon/hbase
# Web UI: http://192.168.31.192:16010/master-status
```
### 3.13 ZooKeeper
```bash
docker pull zookeeper
mkdir -p /home/zookeeper/node1/data
mkdir -p /home/zookeeper/node1/conf
mkdir -p /home/zookeeper/node1/logs
docker network create zk_net
docker run -d \
  --name zookeeper1 \
  --privileged=true \
  -p 2181:2181 \
  --restart=always \
  --network zk_net \
  -v /home/zookeeper/node1/data:/data \
  -v /home/zookeeper/node1/conf:/conf \
  -v /home/zookeeper/node1/logs:/datalog \
  zookeeper
vi /home/zookeeper/node1/conf/zoo.cfg

################## zoo.cfg ##################
# Heartbeat interval in milliseconds; one heartbeat is sent every tickTime.
# Besides health checking, it also drives the follower/leader communication timing.
tickTime=2000
# Maximum number of ticks a follower may take to connect and sync with the leader at startup
initLimit=10
# Maximum number of ticks allowed between a request and its acknowledgement
# between a follower and the leader
syncLimit=5
# the port at which the clients will connect
clientPort=2181
audit.enable=true
# Maximum client connections per IP; 0 means unlimited
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
#
# Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
dataDir=/data
dataLogDir=/datalog
#############################################
```
Cluster deployment — docker-compose.yml:

```yaml
version: '3.1'
services:
  zoo1:
    image: zookeeper
    restart: always
    container_name: zoo1
    ports:
      - 2181:2181
    volumes:
      - /home/zookeeper/zoo1/data:/data
      - /home/zookeeper/zoo1/datalog:/datalog
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_AUTOPURGE_PURGEINTERVAL: 1
  zoo2:
    image: zookeeper
    restart: always
    container_name: zoo2
    ports:
      - 2182:2181
    volumes:
      - /home/zookeeper/zoo2/data:/data
      - /home/zookeeper/zoo2/datalog:/datalog
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_AUTOPURGE_PURGEINTERVAL: 1
  zoo3:
    image: zookeeper
    restart: always
    container_name: zoo3
    ports:
      - 2183:2181
    volumes:
      - /home/zookeeper/zoo3/data:/data
      - /home/zookeeper/zoo3/datalog:/datalog
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_AUTOPURGE_PURGEINTERVAL: 1
```
In `ZOO_SERVERS`, each entry has the form `server.A=B:C:D`:
- A is a number identifying the server. In cluster mode, a file named `myid` must exist in the directory given by `dataDir` in zoo.cfg; it contains nothing but the value of A. On startup, ZooKeeper reads this file and matches the value against the entries in zoo.cfg to work out which server it is (see the sketch after this list).
- B is the server's address.
- C is the port the follower uses to exchange information with the cluster leader.
- D is the port used for leader election when the current leader goes down; the servers talk to each other on this port while electing a new leader.
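With the official image, the `ZOO_MY_ID` variable in the compose file above should write `myid` for you; if you manage the data directories by hand, a minimal sketch (reusing the volume paths above) looks like this:

```bash
# The number must match the A in the corresponding server.A=... entry
echo 1 > /home/zookeeper/zoo1/data/myid
echo 2 > /home/zookeeper/zoo2/data/myid
echo 3 > /home/zookeeper/zoo3/data/myid
```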
### 3.14 Building a GDAL + Java 8 image

> Dockerfile

```dockerfile
FROM osgeo/gdal
RUN rm -f /etc/apt/sources.list \
    && echo "deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse" >> /etc/apt/sources.list \
    && echo "deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse" >> /etc/apt/sources.list \
    && echo "deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse" >> /etc/apt/sources.list \
    && echo "deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse" >> /etc/apt/sources.list \
    && echo "deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse" >> /etc/apt/sources.list \
    && apt-get update
RUN apt-get install openjdk-8-jdk -y
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
    && echo "Asia/Shanghai" > /etc/timezone \
    && dpkg-reconfigure -f noninteractive tzdata
```
```bash
docker build -t java8gdal:0.1 .
docker run -d -it java8gdal:0.1
```
## 4. Docker Advanced
### 4.1 Commit
```bash
# Commit a container as a new image
docker commit -m='describe' -a="author" 'id' 'target-image':[TAG]
```
### 4.2 Volumes — two-way data sync
```bash
# Edit files on the host and the change is immediately visible inside the container (and vice versa)
docker run -it -v <host path>:<container path> -p ... 'image-id'
-v <container path>                   # anonymous volume
-v <volume name>:<container path>     # named volume
-v /<host path>:<container path>      # bind mount to a specific host path
```
```bash
docker volume ls    # list volumes
# A named volume is stored under /var/lib/docker/volumes/<volume name>/_data
docker run -d --name nginx02 -v juming-nginx:/etc/nginx nginx
```
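To check where the named volume above actually lives on the host, `docker volume inspect` reports its mount point:

```bash
docker volume inspect juming-nginx              # look for the "Mountpoint" field
ls /var/lib/docker/volumes/juming-nginx/_data   # the container's /etc/nginx content shows up here
```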
### 4.3 Dockerfile
```dockerfile
FROM          # base image
MAINTAINER    # image author: name + email
RUN           # command run while building the image
ADD           # add files, e.g. a Tomcat tarball (archives are unpacked)
WORKDIR       # working directory inside the image
VOLUME        # mount point
EXPOSE        # exposed port
CMD           # command run when the container starts; only the last CMD takes effect and run arguments replace it
ENTRYPOINT    # command run when the container starts; run arguments are appended to it
ONBUILD       # trigger instruction, executed when this image is used as the base of another build
COPY          # like ADD: copy files into the image
ENV           # set environment variables during the build
```
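The CMD-vs-ENTRYPOINT behaviour above is easiest to see with two throwaway images; `cmd-demo` and `entry-demo` below are hypothetical names used only for this experiment:

```bash
cat > Dockerfile.cmd <<'EOF'
FROM centos
CMD ["ls", "-a"]
EOF
docker build -f Dockerfile.cmd -t cmd-demo .
docker run cmd-demo            # runs: ls -a
docker run cmd-demo -l         # fails: "-l" replaces CMD entirely and is not an executable
docker run cmd-demo ls -al     # works: the whole command has to be re-specified

cat > Dockerfile.entry <<'EOF'
FROM centos
ENTRYPOINT ["ls", "-a"]
EOF
docker build -f Dockerfile.entry -t entry-demo .
docker run entry-demo -l       # works: "-l" is appended, so the container runs: ls -a -l
```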

### 4.4 Custom networks
```bash
# List all Docker networks
docker network ls
# Create a custom network
docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
# Inspect it
docker network inspect mynet
# Put containers on the custom network --- they can then reach each other by IP or by container name
docker run -d -P --name tomcat01 --net mynet tomcat
docker run -d -P --name tomcat02 --net mynet tomcat
# Connect an existing container to the network
docker network connect mynet tomcat01
```
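To verify the by-name connectivity claim above (assuming the image has a ping binary; the official tomcat image may not, in which case install iputils or test with another image):

```bash
docker exec -it tomcat01 ping -c 3 tomcat02   # resolved by Docker's embedded DNS on mynet
docker network inspect mynet                  # shows each container's assigned IP
```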
### 4.x Examples
#### 4.x.1 MySQL
```bash
docker pull mysql:5.7
# Official example: docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
# -e sets environment variables
docker run -d -p 3306:3306 -v /home/mysql/conf:/etc/mysql/ -v /home/mysql/data:/var/lib/mysql -v /home/mysql/log:/var/log/mysql -e MYSQL_ROOT_PASSWORD=WHUaliyun425 --restart=always --name mysql5.7 --network my_net --network-alias mysql5.7 mysql:5.7
docker run -d -p 3306:3306 -v /home/mysql/conf:/etc/mysql/ -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=WHUaliyun425 --restart=always --name mysql5.7 mysql:5.7
```

Configuration file mysql/conf/my.cnf:

```ini
[client]
default-character-set=utf8
[mysql]
default-character-set=utf8
[mysqld]
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
skip-name-resolve
```
Install MySQL 8.0
```bash
docker pull mysql:8.0
# A root password must be set or the container will not start
docker run -p 3306:3306 --name mysqltest1 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:8.0
# Enter the MySQL container and log in
mysql -u root -p     # enter 123456 when prompted
use mysql;
select host,user,authentication_string,plugin from user;                   # check each account's authentication plugin
alter user 'root'@'%' identified with mysql_native_password by '123456';   # switch the plugin to mysql_native_password
alter user 'root'@'%' identified by '123456' password expire never;
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '123456';   # fixes error 1045
flush privileges;    # reload privileges
```
If you get error 1045: Access denied for user 'root'@'localhost' (using password: YES)
```sql
# Fix
use mysql;
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY '123456';
flush privileges;
# MySQL 5.7: grant access from any IP
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'rootpassword' WITH GRANT OPTION;
# MySQL 5.7: grant access from a specific IP
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.2.201' IDENTIFIED BY 'rootpassword' WITH GRANT OPTION;
```
Changing the MySQL password
```sql
use mysql;
UPDATE user SET Password = PASSWORD('WHUaliyun425') WHERE user = 'root';
FLUSH PRIVILEGES;
# MySQL 5.7: the column is authentication_string
update user set authentication_string = 'WHUaliyun425' where `user` = 'root';
```
#### 4.x.2 PostgreSQL / PostGIS
```bash
# PostgreSQL
docker pull postgres:12
docker run --name postgreSQL_test -e POSTGRES_PASSWORD=953598751 -d -p 5432:5432 -v /home/postgresql:/var/lib/postgresql/ postgres:12
```
Install PostGIS
```bash
docker pull kartoza/postgis:13.0
docker run --name=postgis -d -e POSTGRES_USER=postgres -e POSTGRES_PASS=953598751 -e POSTGRES_DBNAME=gis -e ALLOW_IP_RANGE=0.0.0.0/0 --network my_net --network-alias psql -p 5432:5432 -v /home/postgis:/var/lib/postgresql --restart=always kartoza/postgis:13.0
```
#### 4.x.3 Building an image on top of CentOS
- Write the Dockerfile
```dockerfile
FROM centos
MAINTAINER fengyuaho<953598751@qq.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "-----end-----"
CMD /bin/bash
```
- Build it with the command below (note the trailing dot)
```bash
docker build -f 'file' -t 'image-name':[TAG] .   # -f: path to the Dockerfile, -t: image name and tag
# e.g. docker build -f mydockerfiel_centos -t mycentos:0.1 .
```
#### 4.x.4 Building a Tomcat image
- Prepare the Tomcat and JDK tarballs
- Create the Dockerfile
```dockerfile
FROM centos
MAINTAINER fengyuaho<953598751@qq.com>
COPY read.txt /usr/local/read.txt
ADD jdk-8u271-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.40.tar.gz /usr/local/
RUN yum install -y vim
ENV MYPATH /usr/local/
WORKDIR $MYPATH
ENV JAVA_HOME /usr/local/jdk1.8.0_271
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.40
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.40
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
EXPOSE 8080
CMD /usr/local/apache-tomcat-9.0.40/bin/startup.sh && tail -f /usr/local/apache-tomcat-9.0.40/logs/catalina.out
```
- Build the image
```bash
docker build -t mytomcat:0.1 .
```
- Start the custom Tomcat
```bash
docker run -d -p 8083:8080 --name mytomcat-01 -v /home/tomcat/test:/usr/local/apache-tomcat-9.0.40/webapps/test -v /home/tomcat/logs:/usr/local/apache-tomcat-9.0.40/logs mytomcat:0.1
```
#### 4.x.5 Redis cluster (master/slave)
- Install Redis
```bash
docker pull redis
mkdir -p /home/redisMaster/conf
touch /home/redisMaster/conf/redis.conf
docker run -p 6379:6379 --name redisMaster -v /home/redisMaster/data:/data -v /home/redisMaster/conf/redis.conf:/etc/redis/redis.conf -d redis redis-server /etc/redis/redis.conf
docker run -p 6380:6379 --name redisMaster -v /home/redisMaster/data:/data -v /home/redisMaster/conf/redis.conf:/etc/redis/redis.conf -d redis:6.0 redis-server /etc/redis/redis.conf

mkdir -p /home/redisSlave/conf
touch /home/redisSlave/conf/redis.conf
docker run -p 6380:6379 --name redisSlave -v /home/redisSlave/data:/data -v /home/redisSlave/conf/redis.conf:/etc/redis/redis.conf -d redis redis-server /etc/redis/redis.conf

### In the slave's redis.conf
# slaveof 192.168.31.155 6379     ######### enables replication #########

########### If dump.rdb cannot be read ###########
# The filename where to dump the DB
# dbfilename dump.rdb
# Note that you must specify a directory here, not a file name.
# dir /current/working/directory
```
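Once `slaveof` is configured, replication status can be checked from either container with redis-cli (bundled in the official image):

```bash
docker exec -it redisSlave redis-cli info replication    # expect role:slave and master_link_status:up
docker exec -it redisMaster redis-cli info replication   # expect role:master and connected_slaves:1
```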
## 5. Errors
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/images/create?fromImage=portainer%2Fportainer&tag=latest: dial unix /var/run/docker.sock: connect: permission denied

Fix:

```bash
sudo chmod a+rw /var/run/docker.sock
```
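A narrower alternative to opening up the socket permissions is to add the current user to the docker group (created by the Docker package) and start a new login session:

```bash
sudo usermod -aG docker $USER
newgrp docker        # or log out and back in for the group change to take effect
```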
