
Learning Docker with Kuangshen (Essentials)

Author: 互联网

Container Data Volumes

What Is a Container Data Volume?

A quick review of Docker's philosophy:

Package the application and its environment into a single image!

But what about data? If all the data lives inside the container, deleting the container loses the data! Requirement: data must be persistable.

Take MySQL: delete the container and the whole database is gone! Requirement: MySQL data should be stored on the host.

Containers need a way to share data! Data produced inside a Docker container should be synchronized to the host.

That is volume technology: directory mounting, which mounts a directory from inside the container onto the Linux host.

In one sentence: volumes provide persistence and synchronization for containers, and containers can also share data with one another.

Using Data Volumes

Method 1: mount directly on the command line with -v

-v, --volume list                    Bind mount a volume

docker run -it -v host-dir:container-dir  -p host-port:container-port
# /home/ceshi: the ceshi folder under the host's /home, mapped to /home inside the centos container
[root@localhost home]# docker run -it -v /home/ceshi:/home centos /bin/bash
# The host's /home/ceshi folder is now linked with the container's /home; files and data stay consistent between them
# (Strictly speaking this is not "synchronization": both paths point to the same physical location on disk)

# In a new terminal window
# View it with: docker inspect <container id>
[root@localhost home]# docker inspect 5b1e64d8bbc0


Let's test again:

1. Stop the container

2. Modify the file on the host

3. Start the container

4. The data inside the container is still in sync

Benefit: from now on we only need to edit files locally, and the container synchronizes automatically!

Hands-on: Installing MySQL

Docker Hub

https://hub.docker.com/

Think about: how do we persist MySQL's data?

# Pull the mysql image
[root@localhost home]# docker pull mysql:5.7
# Run the container with data mounts. Note: MySQL needs a root password at startup!
# Reference from the official Docker Hub page:
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag

# Start our own container:
# -d      run in the background
# -p      port mapping
# -v      volume mount
# -e      environment variable
# --name  container name
[root@localhost home]#  docker run -d -p 3306:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7

# After startup, test the connection locally with SQLyog
# SQLyog connects to the server's port 3306, which is mapped to the container's 3306

# Create a database locally and check whether the mapped path picks it up!

[root@localhost home]# ls
ceshi  huang.java  huangjialin  huang.txt  mylinux  mysql  test.java  testTmp
[root@localhost home]# cd mysql
[root@localhost mysql]# ls
conf  data
[root@localhost mysql]# cd data/
[root@localhost data]# ls
auto.cnf    client-cert.pem  ibdata1      ibtmp1              private_key.pem  server-key.pem
ca-key.pem  client-key.pem   ib_logfile0  mysql               public_key.pem   sys
ca.pem      ib_buffer_pool   ib_logfile1  performance_schema  server-cert.pem

Create a new database:

A test folder now appears in the data directory:

[root@localhost data]# ls
auto.cnf    client-cert.pem  ibdata1      ibtmp1              private_key.pem  server-key.pem
ca-key.pem  client-key.pem   ib_logfile0  mysql               public_key.pem   sys
ca.pem      ib_buffer_pool   ib_logfile1  performance_schema  server-cert.pem  test

Now suppose we delete the container:

[root@localhost data]# docker rm -f mysql01
mysql01
[root@localhost data]# docker ps
CONTAINER ID   IMAGE     COMMAND       CREATED          STATUS          PORTS     NAMES
5b1e64d8bbc0   centos    "/bin/bash"   56 minutes ago   Up 39 minutes             keen_leavitt
[root@localhost data]# docker ps -a


[root@localhost home]# ls
ceshi  huang.java  huangjialin  huang.txt  mylinux  mysql  test.java  testTmp
[root@localhost home]# cd mysql
[root@localhost mysql]# ls
conf  data
[root@localhost mysql]# cd data/
[root@localhost data]# ls
auto.cnf    client-cert.pem  ibdata1      ibtmp1              private_key.pem  server-key.pem
ca-key.pem  client-key.pem   ib_logfile0  mysql               public_key.pem   sys
ca.pem      ib_buffer_pool   ib_logfile1  performance_schema  server-cert.pem

The data volume mounted on the host is still intact: this is container data persistence in action.

Named and Anonymous Mounts

# Anonymous mount
-v container-path    # only the container path is given
[root@localhost home]# docker run -d -P --name nginx01 -v /etc/nginx nginx

# List all volumes
[root@localhost home]# docker volume ls    
DRIVER              VOLUME NAME # anonymous volumes get random hash names
local               21159a8518abd468728cdbe8594a75b204a10c26be6c36090cde1ee88965f0d0
local               b17f52d38f528893dd5720899f555caf22b31bf50b0680e7c6d5431dbda2802c

# This is an anonymous mount: -v gives only the container path, no host path!

# -P publishes the exposed ports to random host ports (this run is still an anonymous mount)
[root@localhost home]# docker run -d -P --name nginx01 -v /etc/nginx nginx
a35688cedc667161695f56bd300c12a6468312558c63a634d136bf8e73db4680

# List all volumes
[root@localhost home]# docker volume ls
DRIVER    VOLUME NAME
local     8bef6b16668574673c31978a4d05c212609add5b4ca5ea2f09ff8680bdd304a6
local     a564a190808af4cdca228f6a0ea3664dd14e34fee81811ed7fbae39158337141

# Named mount
[root@localhost home]# ls
ceshi  huang.java  huangjialin  huang.txt  mylinux  mysql  test.java  testTmp
[root@localhost home]# docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx nginx	# named mount
330f5a75c39945c6bed44292925219ed0d8d37adc0b1bc37b37784aae405b520
[root@localhost home]# docker volume ls
DRIVER    VOLUME NAME
local     8bef6b16668574673c31978a4d05c212609add5b4ca5ea2f09ff8680bdd304a6
local     a564a190808af4cdca228f6a0ea3664dd14e34fee81811ed7fbae39158337141
local     juming-nginx	# the named volume

# -v volume-name:container-path gives the volume a name
# Inspect this volume:
[root@localhost home]# docker volume inspect juming-nginx
[
    {
        "CreatedAt": "2021-04-12T06:53:09-07:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/juming-nginx/_data",
        "Name": "juming-nginx",
        "Options": null,
        "Scope": "local"
    }
]

Unless a host directory is specified, every volume used by a Docker container lives under **/var/lib/docker/volumes/volume-name/_data**.
Named mounts make a volume easy to locate, which is why they are used in most cases.

[root@localhost home]# cd /var/lib/docker
[root@localhost docker]# ls
buildkit  containers  image  network  overlay2  plugins  runtimes  swarm  tmp  trust  volumes
[root@localhost docker]# cd volumes/
[root@localhost volumes]# ls
8bef6b16668574673c31978a4d05c212609add5b4ca5ea2f09ff8680bdd304a6  backingFsBlockDev  metadata.db
a564a190808af4cdca228f6a0ea3664dd14e34fee81811ed7fbae39158337141  juming-nginx
[root@localhost volumes]# cd juming-nginx/
[root@localhost juming-nginx]# ls
_data
[root@localhost juming-nginx]# cd _data/
[root@localhost _data]# ls
conf.d  fastcgi_params  koi-utf  koi-win  mime.types  modules  nginx.conf  scgi_params  uwsgi_params  win-utf
[root@localhost _data]# cat nginx.conf 

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

If a host directory is specified (a bind mount), the volume does not show up in docker volume ls.

Distinguishing the Three Mount Styles

# Three mount styles: anonymous, named, and host-path (bind) mounts
-v container-path				# anonymous mount
-v volume-name:container-path	# named mount
-v /host-path:container-path	# bind mount; not shown by docker volume ls

Going further:

# Append :ro or :rw after the container path to set read/write permissions
ro # readonly
rw # readwrite (the default)

# Once set, the permission limits what the container can do with the mounted content!
[root@localhost home]# docker run -d -P --name nginx05 -v juming:/etc/nginx:ro nginx
[root@localhost home]# docker run -d -P --name nginx06 -v juming:/etc/nginx:rw nginx

# ro means the path can only be changed from the host; the container cannot write to it!

A First Look at Dockerfile

A Dockerfile is the build file used to construct a Docker image: a script of commands. Let's try it out first!
The script generates an image. Images are layered; the script is a sequence of commands, and each command adds one layer!

[root@localhost /]# cd /home
[root@localhost home]# ls
ceshi  huang.java  huangjialin  huang.txt  mylinux  mysql  test.java  testTmp
[root@localhost home]# mkdir docker-test-volume
[root@localhost home]# ls
ceshi  docker-test-volume  huang.java  huangjialin  huang.txt  mylinux  mysql  test.java  testTmp
[root@localhost home]# cd docker-test-volume/
[root@localhost docker-test-volume]# pwd
/home/docker-test-volume

# Create a dockerfile; any name works, but Dockerfile is the convention
# File content: INSTRUCTION (uppercase) + arguments
# An image can be generated from this script:
[root@localhost docker-test-volume]# vim dockerfile1
    FROM centos 					# base this image on centos

    VOLUME ["volume01","volume02"] 	# list of directories to mount as volumes

    CMD echo "-----end-----"		# print a line for testing
    CMD /bin/bash					# default to a bash shell

[root@localhost docker-test-volume]# cat dockerfile1 
FROM centos

VOLUME ["volume01","volume02"]

CMD echo "-----end-----"
CMD /bin/bash


# Each instruction here is one layer of the image!
# Build the image:
-f dockerfile1 			# f = file: path to the Dockerfile (here, dockerfile1 in the current directory)
-t huangjialin/centos 	# t = tag: the image name (no leading slash before the name)
. 						# use the current directory as the build context

[root@localhost docker-test-volume]# docker build -f dockerfile1 -t huangjialin/centos:1.0 .
Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM centos
 ---> 300e315adb2f
Step 2/4 : VOLUME ["volume01","volume02"] 			# volume name list
 ---> Running in 2d8ae11d9994
Removing intermediate container 2d8ae11d9994
 ---> 5893b2c78edd
Step 3/4 : CMD echo "-----end-----"					# the echo command
 ---> Running in 2483f0e77b68
Removing intermediate container 2483f0e77b68
 ---> 30bf0ad14072
Step 4/4 : CMD /bin/bash
 ---> Running in 8fee073c961b
Removing intermediate container 8fee073c961b
 ---> 74f0e59c6da4
Successfully built 74f0e59c6da4
Successfully tagged huangjialin/centos:1.0

# View the image we just built
[root@localhost docker-test-volume]# docker images
REPOSITORY            TAG       IMAGE ID       CREATED              SIZE
huangjialin/centos    1.0       74f0e59c6da4   About a minute ago   209MB


[root@localhost docker-test-volume]# docker run -it 74f0e59c6da4 /bin/bash
[root@0be9ef3e0f75 /]# ls -l
total 0
lrwxrwxrwx.   1 root root   7 Nov  3 15:22 bin -> usr/bin
drwxr-xr-x.   5 root root 360 Apr 12 14:38 dev
drwxr-xr-x.   1 root root  66 Apr 12 14:38 etc
drwxr-xr-x.   2 root root   6 Nov  3 15:22 home
lrwxrwxrwx.   1 root root   7 Nov  3 15:22 lib -> usr/lib
lrwxrwxrwx.   1 root root   9 Nov  3 15:22 lib64 -> usr/lib64
drwx------.   2 root root   6 Dec  4 17:37 lost+found
drwxr-xr-x.   2 root root   6 Nov  3 15:22 media
drwxr-xr-x.   2 root root   6 Nov  3 15:22 mnt
drwxr-xr-x.   2 root root   6 Nov  3 15:22 opt
dr-xr-xr-x. 253 root root   0 Apr 12 14:38 proc
dr-xr-x---.   2 root root 162 Dec  4 17:37 root
drwxr-xr-x.  11 root root 163 Dec  4 17:37 run
lrwxrwxrwx.   1 root root   8 Nov  3 15:22 sbin -> usr/sbin
drwxr-xr-x.   2 root root   6 Nov  3 15:22 srv
dr-xr-xr-x.  13 root root   0 Apr  7 10:09 sys
drwxrwxrwt.   7 root root 145 Dec  4 17:37 tmp
drwxr-xr-x.  12 root root 144 Dec  4 17:37 usr
drwxr-xr-x.  20 root root 262 Dec  4 17:37 var
drwxr-xr-x.   2 root root   6 Apr 12 14:38 volume01
drwxr-xr-x.   2 root root   6 Apr 12 14:38 volume02

Each of these volumes must have a synchronized directory on the host!


[root@0be9ef3e0f75 /]# exit
exit
[root@localhost docker-test-volume]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED             STATUS             PORTS                   NAMES
330f5a75c399   nginx     "/docker-entrypoint.…"   54 minutes ago      Up 54 minutes      0.0.0.0:49154->80/tcp   nginx02
a35688cedc66   nginx     "/docker-entrypoint.…"   About an hour ago   Up About an hour   0.0.0.0:49153->80/tcp   nginx01
[root@localhost docker-test-volume]# docker run -it 74f0e59c6da4 /bin/bash
[root@f8bd9a180286 /]# cd volume01
[root@f8bd9a180286 volume01]# touch container.txt
[root@f8bd9a180286 volume01]# ls
container.txt

Check the mount path of the volume:

# docker inspect <container id>
$ docker inspect ca3b45913df5

# In another window
[root@localhost home]# docker images
REPOSITORY            TAG       IMAGE ID       CREATED          SIZE
huangjialin/centos    1.0       74f0e59c6da4   18 minutes ago   209MB
tomcat02.1.0          latest    f5946bde7e99   5 hours ago      672MB
tomcat                8         926c7fd4777e   35 hours ago     533MB
tomcat                latest    bd431ca8553c   35 hours ago     667MB
nginx                 latest    519e12e2a84a   2 days ago       133MB
mysql                 5.7       450379344707   2 days ago       449MB
portainer/portainer   latest    580c0e4e98b0   3 weeks ago      79.1MB
hello-world           latest    d1165f221234   5 weeks ago      13.3kB
centos                latest    300e315adb2f   4 months ago     209MB
elasticsearch         7.6.2     f29a1ee41030   12 months ago    791MB
[root@localhost home]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED             STATUS             PORTS                   NAMES
f8bd9a180286   74f0e59c6da4   "/bin/bash"              3 minutes ago       Up 3 minutes                               gifted_mccarthy
330f5a75c399   nginx          "/docker-entrypoint.…"   59 minutes ago      Up 59 minutes      0.0.0.0:49154->80/tcp   nginx02
a35688cedc66   nginx          "/docker-entrypoint.…"   About an hour ago   Up About an hour   0.0.0.0:49153->80/tcp   nginx01
[root@localhost home]# docker inspect f8bd9a180286

Test whether the file we just created was synchronized out:

[root@localhost home]# cd /var/lib/docker/volumes/52912f4f30befa6a7039dc04b92de15a32ec5de39d3a9db3521f58212e85543d/_data
[root@localhost _data]# ls
container.txt

This approach is used a great deal, because we usually build our own images!
If an image was built without volumes, mount them manually with -v volume-name:container-path.

Data Volume Containers

Synchronizing data between multiple MySQL containers!

Other containers mount the data volumes of a named container!


# Test: start three containers from the image we built earlier

[root@localhost _data]# docker images
REPOSITORY            TAG       IMAGE ID       CREATED         SIZE
huangjialin/centos    1.0       74f0e59c6da4   2 hours ago     209MB
tomcat02.1.0          latest    f5946bde7e99   6 hours ago     672MB
tomcat                8         926c7fd4777e   37 hours ago    533MB
tomcat                latest    bd431ca8553c   37 hours ago    667MB
nginx                 latest    519e12e2a84a   2 days ago      133MB
mysql                 5.7       450379344707   2 days ago      449MB
portainer/portainer   latest    580c0e4e98b0   3 weeks ago     79.1MB
hello-world           latest    d1165f221234   5 weeks ago     13.3kB
centos                latest    300e315adb2f   4 months ago    209MB
elasticsearch         7.6.2     f29a1ee41030   12 months ago   791MB



# Create docker01 from the image we built (tag 1.0)
[root@localhost _data]#  docker run -it --name docker01 huangjialin/centos:1.0

# List the contents of container docker01
[root@8e2660858dca /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  volume01	volume02



# Detach without stopping the container:
Ctrl + P + Q

# Create docker02, inheriting the volumes of docker01
[root@localhost /]# docker run -it --name docker02 --volumes-from docker01 huangjialin/centos:1.0

# List the contents of container docker02
[root@465894e34af8 /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  volume01	volume02


# Attach to docker01
[root@localhost /]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS         PORTS     NAMES
465894e34af8   huangjialin/centos:1.0   "/bin/sh -c /bin/bash"   2 minutes ago   Up 2 minutes             docker02
8e2660858dca   huangjialin/centos:1.0   "/bin/sh -c /bin/bash"   3 minutes ago   Up 3 minutes             docker01
[root@localhost /]# docker attach 8e2660858dca
[root@8e2660858dca /]# ls -l
total 0
lrwxrwxrwx.   1 root root   7 Nov  3 15:22 bin -> usr/bin
drwxr-xr-x.   5 root root 360 Apr 13 05:47 dev
drwxr-xr-x.   1 root root  66 Apr 13 05:47 etc
drwxr-xr-x.   2 root root   6 Nov  3 15:22 home
lrwxrwxrwx.   1 root root   7 Nov  3 15:22 lib -> usr/lib
lrwxrwxrwx.   1 root root   9 Nov  3 15:22 lib64 -> usr/lib64
drwx------.   2 root root   6 Dec  4 17:37 lost+found
drwxr-xr-x.   2 root root   6 Nov  3 15:22 media
drwxr-xr-x.   2 root root   6 Nov  3 15:22 mnt
drwxr-xr-x.   2 root root   6 Nov  3 15:22 opt
dr-xr-xr-x. 242 root root   0 Apr 13 05:47 proc
dr-xr-x---.   2 root root 162 Dec  4 17:37 root
drwxr-xr-x.  11 root root 163 Dec  4 17:37 run
lrwxrwxrwx.   1 root root   8 Nov  3 15:22 sbin -> usr/sbin
drwxr-xr-x.   2 root root   6 Nov  3 15:22 srv
dr-xr-xr-x.  13 root root   0 Apr 13 05:27 sys
drwxrwxrwt.   7 root root 145 Dec  4 17:37 tmp
drwxr-xr-x.  12 root root 144 Dec  4 17:37 usr
drwxr-xr-x.  20 root root 262 Dec  4 17:37 var
drwxr-xr-x.   2 root root   6 Apr 13 05:47 volume01
drwxr-xr-x.   2 root root   6 Apr 13 05:47 volume02
# Create a file
[root@8e2660858dca /]# cd volume01
[root@8e2660858dca volume01]# ls
[root@8e2660858dca volume01]# touch docker01

# Attach to docker02
[root@localhost /]# docker attach 465894e34af8
[root@465894e34af8 /]# ls  -l
total 0
lrwxrwxrwx.   1 root root   7 Nov  3 15:22 bin -> usr/bin
drwxr-xr-x.   5 root root 360 Apr 13 05:48 dev
drwxr-xr-x.   1 root root  66 Apr 13 05:48 etc
drwxr-xr-x.   2 root root   6 Nov  3 15:22 home
lrwxrwxrwx.   1 root root   7 Nov  3 15:22 lib -> usr/lib
lrwxrwxrwx.   1 root root   9 Nov  3 15:22 lib64 -> usr/lib64
drwx------.   2 root root   6 Dec  4 17:37 lost+found
drwxr-xr-x.   2 root root   6 Nov  3 15:22 media
drwxr-xr-x.   2 root root   6 Nov  3 15:22 mnt
drwxr-xr-x.   2 root root   6 Nov  3 15:22 opt
dr-xr-xr-x. 241 root root   0 Apr 13 05:48 proc
dr-xr-x---.   2 root root 162 Dec  4 17:37 root
drwxr-xr-x.  11 root root 163 Dec  4 17:37 run
lrwxrwxrwx.   1 root root   8 Nov  3 15:22 sbin -> usr/sbin
drwxr-xr-x.   2 root root   6 Nov  3 15:22 srv
dr-xr-xr-x.  13 root root   0 Apr 13 05:27 sys
drwxrwxrwt.   7 root root 145 Dec  4 17:37 tmp
drwxr-xr-x.  12 root root 144 Dec  4 17:37 usr
drwxr-xr-x.  20 root root 262 Dec  4 17:37 var
drwxr-xr-x.   2 root root  22 Apr 13 05:55 volume01
drwxr-xr-x.   2 root root   6 Apr 13 05:47 volume02
# The file created in docker01 is here
[root@465894e34af8 /]# cd volume01
[root@465894e34af8 volume01]# ls
docker01

Data is shared between the containers.
(Verified: deletions are synchronized as well.)

# Start a docker03, also inheriting from docker01
$ docker run -it --name docker03 --volumes-from docker01 caoshipeng/centos:latest
$ cd volume01	# enter volume01 and check whether docker01's data is here too
$ ls 
docker01.txt

# Test: remove docker01, then check whether docker02 and docker03 can still access the file
# Result: the data is still present in docker02 and docker03

Sharing data between multiple MySQL containers:

$ docker run -d -p 3306:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7

$ docker run -d -p 3310:3306 -e MYSQL_ROOT_PASSWORD=123456 --name mysql02 --volumes-from mysql01  mysql:5.7

# The two containers now share their data!

Conclusion

Data volume containers pass configuration between containers; their lifecycle lasts until no container is using them anymore.

But once data has been persisted to the host, the local copy is never deleted!

DockerFile

DockerFile Introduction

A dockerfile is the file used to build Docker images: a script of commands and parameters!

Build steps:

1. Write a dockerfile

2. docker build it into an image

3. docker run the image

4. docker push the image to a registry (Docker Hub, Alibaba Cloud, ...)


Many official images are bare-bones base packages with lots of functionality missing, so we usually build our own images!

If the official teams can build images, so can we!

The DockerFile Build Process

Basics:

1. Every reserved keyword (instruction) must be uppercase

2. Instructions execute in order, top to bottom

3. # marks a comment

4. Each instruction creates and commits a new image layer!
Dockerfiles are developer-facing: to ship a project as an image, you write a dockerfile. The file itself is very simple!

Docker images are steadily becoming the standard for enterprise delivery, so this is a must-know!

Steps: development, deployment, and operations; none can be skipped

DockerFile: the build file that defines every step; the source code

DockerImages: the image produced by building the DockerFile; the product that is released and run.

Docker container: a container is an image running and providing a service.

DockerFile Instructions

FROM				# base image; everything is built from here
MAINTAINER			# image author: name + email
RUN					# command to run while building the image
ADD					# add content, e.g. a tomcat tarball; archives are extracted automatically
WORKDIR				# the image's working directory
VOLUME				# directories to mount as volumes
EXPOSE				# port to expose
CMD					# command to run when the container starts; only the last CMD takes effect, and it can be replaced
ENTRYPOINT			# command to run when the container starts; extra arguments are appended to it
ONBUILD				# trigger instruction: runs when another DockerFile builds FROM this image
COPY				# like ADD: copies files into the image
ENV					# set environment variables during the build!
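To see how these instructions fit together, here is a minimal sketch (this exact file is not from the tutorial; the names and paths are illustrative):

```dockerfile
# Illustrative only: a tiny Dockerfile combining the common instructions
FROM centos                         # base image
ENV APP_HOME /usr/local/app         # environment variable, usable below and at runtime
WORKDIR $APP_HOME                   # working directory for the following instructions
RUN yum -y install vim              # runs at build time, producing a new layer
VOLUME ["/data"]                    # anonymous volume mount point
EXPOSE 8080                         # document the listening port
CMD ["echo", "container started"]   # default startup command, replaceable at docker run
```

Note that only RUN executes during docker build; CMD (or ENTRYPOINT) runs each time a container starts from the image.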


Hands-on Tests

The scratch image

FROM scratch
ADD centos-7-x86_64-docker.tar.xz /

LABEL \
    org.label-schema.schema-version="1.0" \
    org.label-schema.name="CentOS Base Image" \
    org.label-schema.vendor="CentOS" \
    org.label-schema.license="GPLv2" \
    org.label-schema.build-date="20200504" \
    org.opencontainers.image.title="CentOS Base Image" \
    org.opencontainers.image.vendor="CentOS" \
    org.opencontainers.image.licenses="GPL-2.0-only" \
    org.opencontainers.image.created="2020-05-04 00:00:00+01:00"

CMD ["/bin/bash"]

99% of the images on Docker Hub derive from this FROM scratch base; you build on it by adding the software and configuration you need.

Creating Your Own centos

# 1. Create a dockerfile directory under /home
[root@localhost /]# cd home
[root@localhost home]# mkdir dockerfile

# 2. Create a mydockerfile-centos file inside the dockerfile directory
[root@localhost home]# cd dockerfile/

[root@localhost dockerfile]# vim mydockerfile-centos

# 3. Write the Dockerfile
FROM centos							# start from the official centos image
MAINTAINER huangjialin<2622046365@qq.com>	# author

ENV MYPATH /usr/local				# environment variable for the working directory
WORKDIR $MYPATH						# set the working directory to MYPATH

RUN yum -y install vim				# add the vim command to the stock centos
RUN yum -y install net-tools		# add the ifconfig command to the stock centos

EXPOSE 80							# expose port 80

CMD echo $MYPATH					# print the MYPATH path
CMD echo "-----end----"				
CMD /bin/bash						# drop into /bin/bash on startup

# 4. Build an image from this file
# Command: docker build -f <file path> -t <image name>:[tag] .
[root@localhost dockerfile]# docker build -f mydockerfile-centos -t mysentos:0.1 .

# 5. When you see the following, the build succeeded:
Successfully built 2315251fefdd
Successfully tagged mysentos:0.1


[root@localhost home]# docker images
REPOSITORY            TAG       IMAGE ID       CREATED          SIZE
mysentos              0.1       2315251fefdd   52 seconds ago   291MB
huangjialin/centos    1.0       74f0e59c6da4   18 hours ago     209MB
tomcat02.1.0          latest    f5946bde7e99   22 hours ago     672MB
tomcat                8         926c7fd4777e   2 days ago       533MB
tomcat                latest    bd431ca8553c   2 days ago       667MB
nginx                 latest    519e12e2a84a   3 days ago       133MB
mysql                 5.7       450379344707   3 days ago       449MB
portainer/portainer   latest    580c0e4e98b0   3 weeks ago      79.1MB
hello-world           latest    d1165f221234   5 weeks ago      13.3kB
centos                latest    300e315adb2f   4 months ago     209MB
elasticsearch         7.6.2     f29a1ee41030   12 months ago    791MB

# 6. Test run
[root@localhost home]# docker run -it mysentos:0.1		# include the tag, otherwise docker looks for the latest version

[root@468db5f02628 local]# pwd	
/usr/local							# matches the MYPATH set via WORKDIR in the Dockerfile
[root@468db5f02628 local]# vim								# the vim command now works
[root@468db5f02628 local]# ifconfig     						# the ifconfig command now works

# docker history <image id> shows the build history of an image
[root@localhost home]# docker history 2315251fefdd
IMAGE          CREATED         CREATED BY                                      SIZE      COMMENT
2315251fefdd   6 minutes ago   /bin/sh -c #(nop)  CMD ["/bin/sh" "-c" "/bin…   0B        
be99d496b25c   6 minutes ago   /bin/sh -c #(nop)  CMD ["/bin/sh" "-c" "echo…   0B        
34fd97cceb13   6 minutes ago   /bin/sh -c #(nop)  CMD ["/bin/sh" "-c" "echo…   0B        
baa95b2530ae   6 minutes ago   /bin/sh -c #(nop)  EXPOSE 80                    0B        
ef9b50de0b5b   6 minutes ago   /bin/sh -c yum -y install net-tools             23.4MB    
071ef8153ad4   6 minutes ago   /bin/sh -c yum -y install vim                   58.1MB    
d6a38a2948f3   7 minutes ago   /bin/sh -c #(nop) WORKDIR /usr/local            0B        
98bd16198bff   7 minutes ago   /bin/sh -c #(nop)  ENV MYPATH=/usr/local        0B        
53816b10cd26   7 minutes ago   /bin/sh -c #(nop)  MAINTAINER huangjialin<26…   0B        
300e315adb2f   4 months ago    /bin/sh -c #(nop)  CMD ["/bin/bash"]            0B        
<missing>      4 months ago    /bin/sh -c #(nop)  LABEL org.label-schema.sc…   0B        
<missing>      4 months ago    /bin/sh -c #(nop) ADD file:bd7a2aed6ede423b7…   209MB     



Whenever we get hold of an image, we can use "docker history <image id>" to study how it was made.

CMD vs ENTRYPOINT

CMD					# command to run when the container starts; only the last CMD takes effect, and it can be replaced.
ENTRYPOINT			# command to run when the container starts; extra arguments are appended to it
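The two can also be combined: ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run can override. A small illustrative sketch (not from the tutorial):

```dockerfile
FROM centos
ENTRYPOINT ["ls"]    # the fixed executable
CMD ["-a"]           # default arguments; replaced by any arguments given to docker run
# docker run <image>          runs: ls -a
# docker run <image> -l /tmp  runs: ls -l /tmp  (CMD is replaced, ENTRYPOINT stays)
```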

Testing CMD

# Write the dockerfile
[root@localhost dockerfile]# vim dockerfile-test-cmd
FROM centos
CMD ["ls","-a"]					# run ls -a on startup

# CMD []: the command to run is stored as an array.

# Build the image
[root@localhost dockerfile]# docker build -f dockerfile-test-cmd -t cmd-test:0.1 .
Sending build context to Docker daemon  3.072kB
Step 1/2 : FROM centos
 ---> 300e315adb2f
Step 2/2 : CMD ["ls","-a"]
 ---> Running in 3d990f51fd5a
Removing intermediate container 3d990f51fd5a
 ---> 0e927777d383
Successfully built 0e927777d383
Successfully tagged cmd-test:0.1

# Run the image
[root@localhost dockerfile]# docker run 0e927777d383		# as the output shows, ls -a ran on startup
.
..
.dockerenv
bin
dev
etc
home
lib
lib64
lost+found
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var


# Try appending -l, hoping for ls -al: a detailed listing
[root@localhost dockerfile]# docker run cmd-test:0.1 -l
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-l\":
executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled 

# With CMD, the -l replaced CMD ["ls","-a"] entirely, and -l by itself is not an executable, hence the error

[root@localhost dockerfile]# docker run 0e927777d383 ls -al
total 0
drwxr-xr-x.   1 root root   6 Apr 13 08:38 .
drwxr-xr-x.   1 root root   6 Apr 13 08:38 ..
-rwxr-xr-x.   1 root root   0 Apr 13 08:38 .dockerenv
lrwxrwxrwx.   1 root root   7 Nov  3 15:22 bin -> usr/bin
drwxr-xr-x.   5 root root 340 Apr 13 08:38 dev
drwxr-xr-x.   1 root root  66 Apr 13 08:38 etc
drwxr-xr-x.   2 root root   6 Nov  3 15:22 home
lrwxrwxrwx.   1 root root   7 Nov  3 15:22 lib -> usr/lib
lrwxrwxrwx.   1 root root   9 Nov  3 15:22 lib64 -> usr/lib64
drwx------.   2 root root   6 Dec  4 17:37 lost+found
drwxr-xr-x.   2 root root   6 Nov  3 15:22 media
drwxr-xr-x.   2 root root   6 Nov  3 15:22 mnt
drwxr-xr-x.   2 root root   6 Nov  3 15:22 opt
dr-xr-xr-x. 247 root root   0 Apr 13 08:38 proc
dr-xr-x---.   2 root root 162 Dec  4 17:37 root
drwxr-xr-x.  11 root root 163 Dec  4 17:37 run
lrwxrwxrwx.   1 root root   8 Nov  3 15:22 sbin -> usr/sbin
drwxr-xr-x.   2 root root   6 Nov  3 15:22 srv
dr-xr-xr-x.  13 root root   0 Apr 13 05:27 sys
drwxrwxrwt.   7 root root 145 Dec  4 17:37 tmp
drwxr-xr-x.  12 root root 144 Dec  4 17:37 usr
drwxr-xr-x.  20 root root 262 Dec  4 17:37 var

Testing ENTRYPOINT

# Write the dockerfile
$ vim dockerfile-test-entrypoint
FROM centos
ENTRYPOINT ["ls","-a"]

# Build the image
$ docker build  -f dockerfile-test-entrypoint -t entrypoint-test:0.1 .

# Run the image
$ docker run entrypoint-test:0.1
.
..
.dockerenv
bin
dev
etc
home
lib
lib64
lost+found ...

# Our arguments are appended directly after the ENTRYPOINT command
$ docker run entrypoint-test:0.1 -l
total 56
drwxr-xr-x   1 root root 4096 May 16 06:32 .
drwxr-xr-x   1 root root 4096 May 16 06:32 ..
-rwxr-xr-x   1 root root    0 May 16 06:32 .dockerenv
lrwxrwxrwx   1 root root    7 May 11  2019 bin -> usr/bin
drwxr-xr-x   5 root root  340 May 16 06:32 dev
drwxr-xr-x   1 root root 4096 May 16 06:32 etc
drwxr-xr-x   2 root root 4096 May 11  2019 home
lrwxrwxrwx   1 root root    7 May 11  2019 lib -> usr/lib
lrwxrwxrwx   1 root root    9 May 11  2019 lib64 -> usr/lib64 ....

Many Dockerfile instructions look very similar; to understand the differences, the best way to learn is to compare them and test the behavior!

Hands-on: A Tomcat Image

1. Prepare the image files

Put the tomcat and jdk tarballs in the current directory, and write a README.

[root@localhost /]# cd /home
[root@localhost home]# ls
ceshi  dockerfile  docker-test-volume  huang.java  huangjialin  huang.txt  mylinux  mysql  test.java  testTmp
[root@localhost home]# cd huangjialin
[root@localhost huangjialin]# ls
datas  Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos
# Create the directory
[root@localhost huangjialin]# mkdir -vp build/tomcat
mkdir: created directory ‘build’
mkdir: created directory ‘build/tomcat’
[root@localhost huangjialin]# ls
build  datas  Desktop  Documents  Downloads  Music  Pictures  Public  Templates  Videos
[root@localhost huangjialin]# cd build
[root@localhost build]# cd tomcat
# Copy in the two tarballs:
[root@localhost tomcat]# ls
apache-tomcat-9.0.45.tar.gz  jdk-8u221-linux-x64.tar.gz


2. Write the dockerfile

If you use the official name Dockerfile, build finds it automatically and -f is not needed!

[root@localhost tomcat]# ls
apache-tomcat-9.0.45.tar.gz  jdk-8u221-linux-x64.tar.gz
[root@localhost tomcat]# touch readme.txt
[root@localhost tomcat]# vim Dockerfile

FROM centos 										# base image: centos
MAINTAINER huangjialin<2622046365@qq.com>					# author
COPY readme.txt /usr/local/readme.txt						# copy the README file
ADD jdk-8u221-linux-x64.tar.gz /usr/local/ 			# add the jdk; ADD auto-extracts archives
ADD apache-tomcat-9.0.45.tar.gz /usr/local/ 		# add tomcat; ADD auto-extracts archives
RUN yum -y install vim								# install the vim command
ENV MYPATH /usr/local 								# environment variable: the working directory
WORKDIR $MYPATH

ENV JAVA_HOME /usr/local/jdk1.8.0_221 				# environment variable: JAVA_HOME
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.45 	# environment variable: tomcat home
ENV CATALINA_BASH /usr/local/apache-tomcat-9.0.45

# Append to PATH; entries are separated by :
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin 	

EXPOSE 8080 										# the port to expose

# default command: start tomcat, then tail the log to keep the container in the foreground
CMD /usr/local/apache-tomcat-9.0.45/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.45/logs/catalina.out



3. Build the image

# The file uses the default name Dockerfile, so there is no need to specify it with -f
[root@localhost tomcat]# docker build -t diytomcat .

The build succeeds:

[root@localhost tomcat]# docker build -t diytomcat .
Sending build context to Docker daemon  206.6MB
Step 1/15 : FROM centos
 ---> 300e315adb2f
Step 2/15 : MAINTAINER huangjialin<2622046365@qq.com>
 ---> Using cache
 ---> 53816b10cd26
Step 3/15 : COPY readme.txt /usr/local/readme.txt
 ---> b17277a321e4
Step 4/15 : ADD jdk-8u221-linux-x64.tar.gz /usr/local/
 ---> bf206090dd5b
Step 5/15 : ADD apache-tomcat-9.0.45.tar.gz /usr/local/
 ---> e7e5d7cb0c43
Step 6/15 : RUN yum -y install vim
 ---> Running in 7984f5e786a8
CentOS Linux 8 - AppStream                      791 kB/s | 6.3 MB     00:08    
CentOS Linux 8 - BaseOS                         1.2 MB/s | 2.3 MB     00:01    
CentOS Linux 8 - Extras                          18 kB/s | 9.6 kB     00:00    
Last metadata expiration check: 0:00:01 ago on Tue Apr 13 11:41:15 2021.
Dependencies resolved.
================================================================================
 Package             Arch        Version                   Repository      Size
================================================================================
Installing:
 vim-enhanced        x86_64      2:8.0.1763-15.el8         appstream      1.4 M
Installing dependencies:
 gpm-libs            x86_64      1.20.7-15.el8             appstream       39 k
 vim-common          x86_64      2:8.0.1763-15.el8         appstream      6.3 M
 vim-filesystem      noarch      2:8.0.1763-15.el8         appstream       48 k
 which               x86_64      2.21-12.el8               baseos          49 k

Transaction Summary
================================================================================
Install  5 Packages

Total download size: 7.8 M
Installed size: 30 M
Downloading Packages:
(1/5): gpm-libs-1.20.7-15.el8.x86_64.rpm        187 kB/s |  39 kB     00:00    
(2/5): vim-filesystem-8.0.1763-15.el8.noarch.rp 402 kB/s |  48 kB     00:00    
(3/5): which-2.21-12.el8.x86_64.rpm             106 kB/s |  49 kB     00:00    
(4/5): vim-enhanced-8.0.1763-15.el8.x86_64.rpm  500 kB/s | 1.4 MB     00:02    
(5/5): vim-common-8.0.1763-15.el8.x86_64.rpm    599 kB/s | 6.3 MB     00:10    
--------------------------------------------------------------------------------
Total                                           672 kB/s | 7.8 MB     00:11     
warning: /var/cache/dnf/appstream-02e86d1c976ab532/packages/gpm-libs-1.20.7-15.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
CentOS Linux 8 - AppStream                      107 kB/s | 1.6 kB     00:00    
Importing GPG key 0x8483C65D:
 Userid     : "CentOS (CentOS Official Signing Key) <security@centos.org>"
 Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Installing       : which-2.21-12.el8.x86_64                               1/5 
  Installing       : vim-filesystem-2:8.0.1763-15.el8.noarch                2/5 
  Installing       : vim-common-2:8.0.1763-15.el8.x86_64                    3/5 
  Installing       : gpm-libs-1.20.7-15.el8.x86_64                          4/5 
  Running scriptlet: gpm-libs-1.20.7-15.el8.x86_64                          4/5 
  Installing       : vim-enhanced-2:8.0.1763-15.el8.x86_64                  5/5 
  Running scriptlet: vim-enhanced-2:8.0.1763-15.el8.x86_64                  5/5 
  Running scriptlet: vim-common-2:8.0.1763-15.el8.x86_64                    5/5 
  Verifying        : gpm-libs-1.20.7-15.el8.x86_64                          1/5 
  Verifying        : vim-common-2:8.0.1763-15.el8.x86_64                    2/5 
  Verifying        : vim-enhanced-2:8.0.1763-15.el8.x86_64                  3/5 
  Verifying        : vim-filesystem-2:8.0.1763-15.el8.noarch                4/5 
  Verifying        : which-2.21-12.el8.x86_64                               5/5 

Installed:
  gpm-libs-1.20.7-15.el8.x86_64         vim-common-2:8.0.1763-15.el8.x86_64    
  vim-enhanced-2:8.0.1763-15.el8.x86_64 vim-filesystem-2:8.0.1763-15.el8.noarch
  which-2.21-12.el8.x86_64             

Complete!
Removing intermediate container 7984f5e786a8
 ---> 5e55934a1698
Step 7/15 : ENV MYPATH /usr/local
 ---> Running in 1c27b1556650
Removing intermediate container 1c27b1556650
 ---> bf7676c461b7
Step 8/15 : WORKDIR $MYPATH
 ---> Running in ea7b376828ee
Removing intermediate container ea7b376828ee
 ---> 829e13be0dd2
Step 9/15 : ENV JAVA_HOME /usr/local/jdk1.8.0_221
 ---> Running in e870347689db
Removing intermediate container e870347689db
 ---> 9addfa9f2af9
Step 10/15 : ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
 ---> Running in 35be0d0d1e87
Removing intermediate container 35be0d0d1e87
 ---> 2b8399db457f
Step 11/15 : ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.45
 ---> Running in 6268510dbb10
Removing intermediate container 6268510dbb10
 ---> f587d0e0d7b1
Step 12/15 : ENV CATALINA_BASH /usr/local/apache-tomcat-9.0.45
 ---> Running in 584842f44cb2
Removing intermediate container 584842f44cb2
 ---> 9352b4a38800
Step 13/15 : ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
 ---> Running in 3882c79944b1
Removing intermediate container 3882c79944b1
 ---> 2189e809aa16
Step 14/15 : EXPOSE 8080
 ---> Running in 2fdcc0bb062d
Removing intermediate container 2fdcc0bb062d
 ---> 2ada2912e565
Step 15/15 : CMD /usr/local/apache-tomcat-9.0.45/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.45/logs/catalina.out
 ---> Running in b5253d274a87
Removing intermediate container b5253d274a87
 ---> 62c96b0fe815
Successfully built 62c96b0fe815
Successfully tagged diytomcat:latest

4. Run the image

# -d: run in background  -p: publish port  --name: container name  -v: bind-mount path
[root@localhost tomcat]# docker run -d -p 9090:8080 --name diytomcat -v /home/huangjialin/build/tomcat/test:/usr/local/apache-tomcat-9.0.45/webapps/test -v /home/huangjialin/build/tomcat/tomcatlogs/:/usr/local/apache-tomcat-9.0.45/logs diytomcat
eefdce2320da80a26d04951fa62ead9155260c9ec2fc7195fb5d78332e70b20e
[root@localhost tomcat]# docker exec -it eefdce2320da80a2 /bin/bash
[root@eefdce2320da local]# ls
apache-tomcat-9.0.45  bin  etc	games  include	jdk1.8.0_221  lib  lib64  libexec  readme.txt  sbin  share  src
[root@eefdce2320da local]# pwd
/usr/local
[root@eefdce2320da local]# ls -l
total 0
drwxr-xr-x. 1 root root  45 Apr 13 11:40 apache-tomcat-9.0.45
drwxr-xr-x. 2 root root   6 Nov  3 15:22 bin
drwxr-xr-x. 2 root root   6 Nov  3 15:22 etc
drwxr-xr-x. 2 root root   6 Nov  3 15:22 games
drwxr-xr-x. 2 root root   6 Nov  3 15:22 include
drwxr-xr-x. 7   10  143 245 Jul  4  2019 jdk1.8.0_221
drwxr-xr-x. 2 root root   6 Nov  3 15:22 lib
drwxr-xr-x. 3 root root  17 Dec  4 17:37 lib64
drwxr-xr-x. 2 root root   6 Nov  3 15:22 libexec
-rw-r--r--. 1 root root   0 Apr 13 11:24 readme.txt
drwxr-xr-x. 2 root root   6 Nov  3 15:22 sbin
drwxr-xr-x. 5 root root  49 Dec  4 17:37 share
drwxr-xr-x. 2 root root   6 Nov  3 15:22 src
[root@eefdce2320da local]# cd apache-tomcat-9.0.45/
[root@eefdce2320da apache-tomcat-9.0.45]# ls
BUILDING.txt  CONTRIBUTING.md  LICENSE	NOTICE	README.md  RELEASE-NOTES  RUNNING.txt  bin  conf  lib  logs  temp  webapps  work
[root@eefdce2320da apache-tomcat-9.0.45]# ls -l
total 128
-rw-r-----. 1 root root 18984 Mar 30 10:29 BUILDING.txt
-rw-r-----. 1 root root  5587 Mar 30 10:29 CONTRIBUTING.md
-rw-r-----. 1 root root 57092 Mar 30 10:29 LICENSE
-rw-r-----. 1 root root  2333 Mar 30 10:29 NOTICE
-rw-r-----. 1 root root  3257 Mar 30 10:29 README.md
-rw-r-----. 1 root root  6898 Mar 30 10:29 RELEASE-NOTES
-rw-r-----. 1 root root 16507 Mar 30 10:29 RUNNING.txt
drwxr-x---. 2 root root  4096 Mar 30 10:29 bin
drwx------. 1 root root    22 Apr 13 12:34 conf
drwxr-x---. 2 root root  4096 Mar 30 10:29 lib
drwxr-xr-x. 2 root root   197 Apr 13 12:34 logs
drwxr-x---. 2 root root    30 Mar 30 10:29 temp
drwxr-x---. 1 root root    18 Apr 13 12:34 webapps
drwxr-x---. 1 root root    22 Apr 13 12:34 work


# in another window
[root@localhost tomcat]# curl localhost:9090
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <title>Apache Tomcat/9.0.45</title>
        <link href="favicon.ico" rel="icon" type="image/x-icon" />
        <link href="tomcat.css" rel="stylesheet" type="text/css" />
    </head>
...

5. Test access

[root@localhost tomcat]# docker exec -it <container id> /bin/bash

[root@localhost tomcat]# curl localhost:9090


6. Deploy a project

(Because of the volume mount, we can write the project locally and it is published immediately!)

[root@localhost tomcat]# ls
apache-tomcat-9.0.45.tar.gz  Dockerfile  jdk-8u221-linux-x64.tar.gz  readme.txt  test  tomcatlogs
[root@localhost tomcat]# cd test/
[root@localhost test]# pwd
/home/huangjialin/build/tomcat/test
[root@localhost test]# mkdir WEB-INF
[root@localhost test]# ls
WEB-INF
[root@localhost test]# cd WEB-INF/
[root@localhost WEB-INF]# vim web.xml
[root@localhost WEB-INF]# cd ../
[root@localhost test]# vim index.jsp

Reference: https://segmentfault.com/a/1190000011404088

web.xml

  <?xml version="1.0" encoding="UTF-8"?>
  <web-app xmlns="http://java.sun.com/xml/ns/javaee"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                               http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
           version="3.0">

  </web-app>

Reference: https://www.runoob.com/jsp/jsp-syntax.html

index.jsp

<%@ page language="java" contentType="text/html; charset=UTF-8"
    pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>huangjialin-菜鸟教程(runoob.com)</title>
</head>
<body>
Hello World!<br/>
<%
System.out.println("Your IP address: " + request.getRemoteAddr());
%>
</body>
</html>

Result: the project is deployed successfully and can be accessed directly!

Our workflow from now on: master writing Dockerfiles! Everything after this is published and run as docker images!

Publishing your own image

Publish to Docker Hub

1. Address: https://hub.docker.com/

2. Make sure your account can log in

3. Log in


[root@localhost tomcat]# docker login --help
Usage:  docker login [OPTIONS] [SERVER]

Log in to a Docker registry.
If no server is specified, the default is defined by the daemon.

Options:
  -p, --password string   Password
      --password-stdin    Take the password from stdin
  -u, --username string   Username

[root@localhost tomcat]# docker login -u 你的用户名 -p 你的密码

[root@localhost tomcat]# docker login -u yanghuihui520
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

4. Push the image

# the push will fail: without a user prefix, docker tries to push to the official "library" namespace
# Solutions:
# Option 1: add your Docker Hub username when building, then a push goes to your own repository
[root@localhost tomcat]# docker build -t kuangshen/mytomcat:0.1 .

# Option 2: use docker tag, then push again
[root@localhost tomcat]# docker tag <image id> kuangshen/mytomcat:1.0
# then push again
[root@localhost tomcat]# docker push kuangshen/mytomcat:1.0

The prefix must match your own account name:
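The naming rule can be seen by pulling an image reference apart. A minimal sketch in plain POSIX shell, no docker required (the reference string is just the example used above):

```shell
#!/bin/sh
# Sketch of the naming rule: an image reference is [registry/]user/repo[:tag].
# With no user prefix, docker assumes the official "library" namespace on
# docker.io, which you cannot push to -- hence the docker tag step above.
ref="yanghuihui520/tomcat:1.0"
user=${ref%%/*}        # everything before the first "/"  -> yanghuihui520
repo_tag=${ref#*/}     # everything after it              -> tomcat:1.0
tag=${repo_tag#*:}     # the part after ":"               -> 1.0
echo "$user $tag"
```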

[root@localhost tomcat]# docker tag 62c96b0fe815 yanghuihui520/tomcat:1.0
[root@localhost tomcat]# docker images
REPOSITORY             TAG       IMAGE ID       CREATED         SIZE
huangjialin/tomcat     1.0       62c96b0fe815   3 hours ago     690MB
...
[root@localhost tomcat]# docker push yanghuihui520/tomcat:1.0
The push refers to repository [docker.io/yanghuihui520/tomcat]
a7b8bb209ca2: Pushing [==========>                                        ]  12.47MB/58.05MB
5ba6c0e6c8ff: Pushing [==========>                                        ]  3.186MB/15.9MB
b3f3595d4705: Pushing [>                                                  ]  3.277MB/406.7MB
50c4c2648dca: Layer already exists 
2653d992f4ef: Pushing [=>                                                 ]  7.676MB/209.3MB




Publish to Alibaba Cloud Container Registry

The official docs are very detailed: https://cr.console.aliyun.com/repository/

$ sudo docker login --username=zchengx registry.cn-shenzhen.aliyuncs.com
$ sudo docker tag [ImageId] registry.cn-shenzhen.aliyuncs.com/dsadxzc/cheng:[image version]

# replace the image id and the version
sudo docker tag a5ef1f32aaae registry.cn-shenzhen.aliyuncs.com/dsadxzc/cheng:1.0
# replace the version
$ sudo docker push registry.cn-shenzhen.aliyuncs.com/dsadxzc/cheng:[image version]

Summary


Docker Networking

Understanding docker0

Before starting, clear out the docker images and containers from earlier

# remove all containers
$ docker rm -f $(docker ps -aq)

# remove all images
$ docker rmi -f $(docker images -aq)

Test

[root@localhost tomcat]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:1a:80:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.254.133/24 brd 192.168.254.255 scope global noprefixroute dynamic ens33
       valid_lft 1365sec preferred_lft 1365sec
    inet6 fe80::78ba:483e:9794:f6c2/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:a2:05:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:a2:05:5a brd ff:ff:ff:ff:ff:ff
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:2a:09:31:67 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2aff:fe09:3167/64 scope link 
       valid_lft forever preferred_lft forever

Three networks

Question: how does docker handle network access for containers?

# Test: run a tomcat
[root@localhost tomcat]# docker run -d -P --name tomcat01 tomcat

# check the network address inside the container
[root@localhost tomcat]# docker exec -it <container id> ip addr

# notice that on startup the container gets an eth0@ifNN interface with an ip address assigned by docker!
[root@localhost tomcat]# docker exec -it tomcat01 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
36: eth0@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

       
# Question: can linux ping the container? Yes. Can the container ping the outside world? Also yes!
[root@localhost tomcat]# ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=19.5 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.105 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.051 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.071 ms
64 bytes from 172.17.0.2: icmp_seq=5 ttl=64 time=0.124 ms
64 bytes from 172.17.0.2: icmp_seq=6 ttl=64 time=0.052 ms
...

How it works

1. Every time we start a docker container, docker assigns it an ip address. As long as docker is installed there is a docker0 interface working in bridged mode, and the technique used is veth-pair!

https://www.cnblogs.com/bakari/p/10613710.html

Run ip addr again

# a new network interface has appeared
[root@localhost tomcat]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:1a:80:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.254.133/24 brd 192.168.254.255 scope global noprefixroute dynamic ens33
       valid_lft 1499sec preferred_lft 1499sec
    inet6 fe80::78ba:483e:9794:f6c2/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:a2:05:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:a2:05:5a brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:2a:09:31:67 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2aff:fe09:3167/64 scope link 
       valid_lft forever preferred_lft forever
37: veth8f89171@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether f2:72:f8:7d:6e:7b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::f072:f8ff:fe7d:6e7b/64 scope link 
       valid_lft forever preferred_lft forever

2. Start another container and test: another pair of interfaces appears

[root@localhost tomcat]# docker run -d -P --name tomcat02 tomcat
781895f439c26dfd5fd489bf1316ab52d0d747d7a5c4f214656ea8ab9bc7d760
[root@localhost tomcat]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:1a:80:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.254.133/24 brd 192.168.254.255 scope global noprefixroute dynamic ens33
       valid_lft 1299sec preferred_lft 1299sec
    inet6 fe80::78ba:483e:9794:f6c2/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:a2:05:5a brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:a2:05:5a brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:2a:09:31:67 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2aff:fe09:3167/64 scope link 
       valid_lft forever preferred_lft forever
37: veth8f89171@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether f2:72:f8:7d:6e:7b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::f072:f8ff:fe7d:6e7b/64 scope link 
       valid_lft forever preferred_lft forever
39: veth701a9f4@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 0a:c9:02:a1:57:a9 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::8c9:2ff:fea1:57a9/64 scope link 
       valid_lft forever preferred_lft forever
# check tomcat02's address
[root@localhost tomcat]# docker exec -it tomcat02 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
38: eth0@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever


# notice that the container's interfaces always come in pairs
# a veth-pair is a pair of virtual device interfaces; they always appear in twos,
# one end attached to the protocol stack, the other ends connected to each other
# because of this property, a veth-pair acts as a bridge connecting virtual network devices
# OpenStack, links between Docker containers, OVS links -- all use the veth-pair technique
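The pairing is visible in the interface indices above: the container's `eth0@if37` and the host's `veth8f89171@if36` reference each other. A small sketch that extracts those index pairs from `ip addr`-style lines (the sample mimics the output shown above; real output has extra fields):

```shell
#!/bin/sh
# The "NN: name@ifMM" notation in ip addr encodes the veth pairing:
# interface NN's peer has index MM.
sample='36: eth0@if37
37: veth8f89171@if36
38: eth0@if39
39: veth701a9f4@if38'

# print "index peer-index" for each interface; pairs reference each other
pairs=$(printf '%s\n' "$sample" | awk -F'[:@ ]+' '{sub("if", "", $3); print $1, $3}')
echo "$pairs"
```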

3. Test whether tomcat01 and tomcat02 can ping each other

# get tomcat01's ip: 172.17.0.2
[root@localhost tomcat]# docker exec -it tomcat01 ip addr  
550: eth0@if551: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
       
# have tomcat02 ping tomcat01
[root@localhost tomcat]# docker exec -it tomcat02 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=9.07 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.145 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.153 ms
64 bytes from 172.17.0.2: icmp_seq=4 ttl=64 time=0.104 ms


# Conclusion: containers can ping each other directly

Network model diagram

Conclusion: tomcat01 and tomcat02 share the same router: docker0.

All containers started without specifying a network are routed through docker0, and docker assigns each one a default available ip.

Summary

Docker uses Linux bridging; on the host, docker0 acts as the network bridge for Docker containers.

All network interfaces in Docker are virtual, and virtual interfaces forward traffic efficiently (e.g. passing files over the internal network).

As soon as a container is deleted, its corresponding veth pair is gone too!

Consider a scenario: we write a microservice with database url=ip. The database ip changes but the project keeps running; we'd like to cope with that. Can we reach containers by name instead?

--link

[root@localhost tomcat]# docker exec -it tomcat02 ping tomcat01   # fails to ping
ping: tomcat01: Name or service not known

# run a tomcat03 with --link tomcat02
[root@localhost tomcat]# docker run -d -P --name tomcat03 --link tomcat02 tomcat
5f9331566980a9e92bc54681caaac14e9fc993f14ad13d98534026c08c0a9aef

# 03 linked to 02
# tomcat03 can ping tomcat02 by name
[root@localhost tomcat]# docker exec -it tomcat03 ping tomcat02
PING tomcat02 (172.17.0.3) 56(84) bytes of data.
64 bytes from tomcat02 (172.17.0.3): icmp_seq=1 ttl=64 time=12.6 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=2 ttl=64 time=1.07 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=3 ttl=64 time=0.365 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=4 ttl=64 time=0.185 ms
64 bytes from tomcat02 (172.17.0.3): icmp_seq=5 ttl=64 time=0.169 ms


# 02 to 03
# tomcat02 cannot ping tomcat03 by name

Digging deeper:

# docker network inspect <network id> -- same subnet
[root@localhost tomcat]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
4b16ee2d6925   bridge    bridge    local
eedd07789e82   host      host      local
48629cdb554a   none      null      local
[root@localhost tomcat]# docker network inspect 4b16ee2d6925
[
    {
        "Name": "bridge",
        "Id": "4b16ee2d6925c9a209b44a0451b7b51c12b530d748cc1162e5184ed12e279de6",
        "Created": "2021-04-12T22:31:55.809338504-07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
...


# docker inspect tomcat03

[root@localhost tomcat]# docker inspect tomcat03 | grep tomcat02
                "/tomcat02:/tomcat03/tomcat02"



Looking at /etc/hosts inside tomcat03, there is an entry for tomcat02

[root@localhost tomcat]# docker exec -it tomcat03 cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
172.17.0.3	tomcat02 781895f439c2
172.17.0.4	2f3a758730ba

--link essentially just adds a mapping to the container's hosts file
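That hosts-file mechanism can be illustrated without docker. A minimal sketch, using a temp file instead of a container's real /etc/hosts (the entry mirrors the one shown in tomcat03 above):

```shell
#!/bin/sh
# Emulation of what --link sets up: resolve a container name through a
# hosts-style file rather than DNS.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.17.0.3\ttomcat02 781895f439c2\n' > "$hosts"

# first matching hostname wins, as with the libc resolver
ip=$(awk '$2 == "tomcat02" {print $1; exit}' "$hosts")
echo "$ip"   # 172.17.0.3
rm -f "$hosts"
```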

Using --link is no longer recommended with Docker!

Custom networks (instead of docker0)

The docker0 problem: it doesn't support access by container name!

Custom networks

docker network
connect     -- Connect a container to a network
create      -- Creates a new network with a name specified by the user
disconnect  -- Disconnects a container from a network
inspect     -- Displays detailed information on a network
ls          -- Lists all the networks created by the user
prune       -- Remove all unused networks
rm          -- Deletes one or more networks

List all docker networks

[root@localhost tomcat]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
4b16ee2d6925   bridge    bridge    local
eedd07789e82   host      host      local
48629cdb554a   none      null      local

Network modes

bridge: bridged via docker0 (the default; networks you create yourself also use bridge mode)

none: no networking configured; rarely used

host: share the network stack with the host

container: join another container's network (rarely used! very limited)

# the commands we have been running implicitly use --net bridge, and that bridge is our docker0
# bridge is docker0
[root@localhost tomcat]# docker run -d -P --name tomcat01 tomcat
# equivalent to => docker run -d -P --name tomcat01 --net bridge tomcat

# docker0 characteristics: it is the default, and container names don't resolve. --link can patch connectivity, but it's clumsy!
# we can create a custom network instead
[root@localhost tomcat]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
9b2a5d2f9fe7f9b9d5c68c20b259fba68e4c510524ade5d0c3afa353a731e92a
[root@localhost tomcat]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
4b16ee2d6925   bridge    bridge    local
eedd07789e82   host      host      local
9b2a5d2f9fe7   mynet     bridge    local
48629cdb554a   none      null      local


Our own network has been created

[root@localhost tomcat]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "9b2a5d2f9fe7f9b9d5c68c20b259fba68e4c510524ade5d0c3afa353a731e92a",
        "Created": "2021-04-13T22:11:59.671350133-07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
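The --subnet value determines how many addresses docker's IPAM can hand out on this network. A quick sanity check of the /16 pool size (plain shell arithmetic):

```shell
#!/bin/sh
# Sanity check on --subnet 192.168.0.0/16: a /16 prefix leaves 32-16 = 16
# host bits, i.e. 2^16 addresses in the block that IPAM allocates from
# (minus the network, broadcast and gateway addresses).
prefix=16
total=$((1 << (32 - prefix)))
echo "$total"   # 65536
```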

Start two tomcats and look at the network again

[root@localhost tomcat]# docker run -d -P --name tomcat-net-01 --net mynet tomcat
5d0fdad2b907b147c75a10420c172c43f87896db0b82bf9d4b34c889ea43647d
[root@localhost tomcat]# docker run -d -P --name tomcat-net-02 --net mynet tomcat
ac6b288e62f1a8cc764bbdc312b3811936b34fd465a70974d24b67fab7a11566

Inspect our network again

[root@localhost tomcat]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "9b2a5d2f9fe7f9b9d5c68c20b259fba68e4c510524ade5d0c3afa353a731e92a",
        "Created": "2021-04-13T22:11:59.671350133-07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "5d0fdad2b907b147c75a10420c172c43f87896db0b82bf9d4b34c889ea43647d": {
                "Name": "tomcat-net-01",
                "EndpointID": "0cde983fd3f3d3fee2630379d0f149c6e9bd851bd1623f64d08a441e3372606e",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            },
            "ac6b288e62f1a8cc764bbdc312b3811936b34fd465a70974d24b67fab7a11566": {
                "Name": "tomcat-net-02",
                "EndpointID": "6119e2fb7bf569b8790af0158710d0bd4db9a2ec4fe63c927c2e28d06c5a4b3c",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Test ping connectivity again

[root@localhost tomcat]# docker exec -it tomcat-net-01 ping 192.168.0.3
PING 192.168.0.3 (192.168.0.3) 56(84) bytes of data.
64 bytes from 192.168.0.3: icmp_seq=1 ttl=64 time=0.177 ms
64 bytes from 192.168.0.3: icmp_seq=2 ttl=64 time=0.071 ms
64 bytes from 192.168.0.3: icmp_seq=3 ttl=64 time=0.084 ms
64 bytes from 192.168.0.3: icmp_seq=4 ttl=64 time=0.080 ms
64 bytes from 192.168.0.3: icmp_seq=5 ttl=64 time=0.089 ms
^C
--- 192.168.0.3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 20ms
rtt min/avg/max/mdev = 0.071/0.100/0.177/0.039 ms
[root@localhost tomcat]# docker exec -it tomcat-net-01 ping tomcat-net-02
PING tomcat-net-02 (192.168.0.3) 56(84) bytes of data.
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=1 ttl=64 time=0.235 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=2 ttl=64 time=0.117 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=3 ttl=64 time=0.066 ms
64 bytes from tomcat-net-02.mynet (192.168.0.3): icmp_seq=4 ttl=64 time=0.111 ms
^C
--- tomcat-net-02 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 0.066/0.132/0.235/0.063 ms

On a custom network, docker maintains the name-to-ip mapping for us; this is the recommended way to use networks day to day!

Benefits:

redis: different clusters use different networks, keeping each cluster safe and healthy

mysql: different clusters use different networks, keeping each cluster safe and healthy

Connecting networks

[root@localhost tomcat]# docker ps
CONTAINER ID   IMAGE     COMMAND             CREATED             STATUS             PORTS                     NAMES
ac6b288e62f1   tomcat    "catalina.sh run"   16 minutes ago      Up 16 minutes      0.0.0.0:49156->8080/tcp   tomcat-net-02
5d0fdad2b907   tomcat    "catalina.sh run"   16 minutes ago      Up 16 minutes      0.0.0.0:49155->8080/tcp   tomcat-net-01
2f3a758730ba   tomcat    "catalina.sh run"   About an hour ago   Up About an hour   0.0.0.0:49154->8080/tcp   tomcat03
781895f439c2   tomcat    "catalina.sh run"   About an hour ago   Up About an hour   0.0.0.0:49153->8080/tcp   tomcat02
4b0e289d67e1   tomcat    "catalina.sh run"   2 days ago          Up 2 hours         0.0.0.0:3355->8080/tcp    tomcat01
[root@localhost tomcat]# docker network --help

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.


[root@localhost tomcat]# docker network connect --help

Usage:  docker network connect [OPTIONS] NETWORK CONTAINER

Connect a container to a network

Options:
      --alias strings           Add network-scoped alias for the container
      --driver-opt strings      driver options for the network
      --ip string               IPv4 address (e.g., 172.30.100.104)
      --ip6 string              IPv6 address (e.g., 2001:db8::33)
      --link list               Add link to another container
      --link-local-ip strings   Add a link-local address for the container

Test: connect tomcat01 to mynet

[root@localhost tomcat]# docker network connect mynet tomcat01
[root@localhost tomcat]# docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "9b2a5d2f9fe7f9b9d5c68c20b259fba68e4c510524ade5d0c3afa353a731e92a",
        "Created": "2021-04-13T22:11:59.671350133-07:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "4b0e289d67e1837e9d0b0ee5d0c021cd5ec0c857883773bab5be9591979fd316": {
                "Name": "tomcat01",
                "EndpointID": "c5dc34f5cb12ce267461147c41a358c286c5ad231fe4557d7bd4c4276779e4be",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
            },
            "5d0fdad2b907b147c75a10420c172c43f87896db0b82bf9d4b34c889ea43647d": {
                "Name": "tomcat-net-01",
                "EndpointID": "0cde983fd3f3d3fee2630379d0f149c6e9bd851bd1623f64d08a441e3372606e",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            },
            "ac6b288e62f1a8cc764bbdc312b3811936b34fd465a70974d24b67fab7a11566": {
                "Name": "tomcat-net-02",
                "EndpointID": "6119e2fb7bf569b8790af0158710d0bd4db9a2ec4fe63c927c2e28d06c5a4b3c",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

After connecting, tomcat01 is attached to mynet as well

One container now has two ip addresses

Test successful

[root@localhost tomcat]# docker exec -it tomcat01 ping tomcat-net-01
PING tomcat-net-01 (192.168.0.2) 56(84) bytes of data.
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=1 ttl=64 time=0.368 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=2 ttl=64 time=0.261 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=3 ttl=64 time=0.097 ms
64 bytes from tomcat-net-01.mynet (192.168.0.2): icmp_seq=4 ttl=64 time=0.070 ms

tomcat01 is connected now; tomcat02 still isn't

Conclusion:

To operate across networks, you need docker network connect to join them!

Hands-on: deploying a Redis cluster

Preparation

First remove all the containers

[root@localhost tomcat]# docker ps
CONTAINER ID   IMAGE     COMMAND             CREATED       STATUS       PORTS                     NAMES
ac6b288e62f1   tomcat    "catalina.sh run"   2 hours ago   Up 2 hours   0.0.0.0:49156->8080/tcp   tomcat-net-02
5d0fdad2b907   tomcat    "catalina.sh run"   2 hours ago   Up 2 hours   0.0.0.0:49155->8080/tcp   tomcat-net-01
2f3a758730ba   tomcat    "catalina.sh run"   3 hours ago   Up 3 hours   0.0.0.0:49154->8080/tcp   tomcat03
781895f439c2   tomcat    "catalina.sh run"   3 hours ago   Up 3 hours   0.0.0.0:49153->8080/tcp   tomcat02
4b0e289d67e1   tomcat    "catalina.sh run"   2 days ago    Up 3 hours   0.0.0.0:3355->8080/tcp    tomcat01
[root@localhost tomcat]# docker rm -f $(docker ps -aq)
ac6b288e62f1
5d0fdad2b907
2f3a758730ba
781895f439c2
cc4db8053534
4d38084a7c0b
b77396e55db3
468db5f02628
8a02b6fddfe3
465894e34af8
8e2660858dca
f8bd9a180286
0be9ef3e0f75
330f5a75c399
a35688cedc66
5b1e64d8bbc0
7589b4d9d9a1
efbb2086dd82
9da019832a38
57700a38b6e2
4d5c745133e4
04c3a127d159
e846cffc9d72
3f00b72dfde0
4b0e289d67e1
ff130a73542d
872a5b63a024
1682649bdcf0
497d37843c8c
e7f19736f3db
56f47d6d2f36
[root@localhost tomcat]# docker images
REPOSITORY             TAG       IMAGE ID       CREATED         SIZE
yanghuihui520/tomcat   1.0       62c96b0fe815   19 hours ago    690MB
huangjialin/tomcat     1.0       62c96b0fe815   19 hours ago    690MB
diytomcat              latest    62c96b0fe815   19 hours ago    690MB
cmd-test               0.1       0e927777d383   22 hours ago    209MB
mysentos               0.1       2315251fefdd   23 hours ago    291MB
huangjialin/centos     1.0       74f0e59c6da4   40 hours ago    209MB
tomcat02.1.0           latest    f5946bde7e99   45 hours ago    672MB
tomcat                 8         926c7fd4777e   3 days ago      533MB
tomcat                 latest    bd431ca8553c   3 days ago      667MB
nginx                  latest    519e12e2a84a   3 days ago      133MB
mysql                  5.7       450379344707   4 days ago      449MB
portainer/portainer    latest    580c0e4e98b0   3 weeks ago     79.1MB
hello-world            latest    d1165f221234   5 weeks ago     13.3kB
centos                 latest    300e315adb2f   4 months ago    209MB
elasticsearch          7.6.2     f29a1ee41030   12 months ago   791MB
[root@localhost tomcat]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Create the redis cluster network and inspect it

docker network create redis --subnet 172.38.0.0/16
docker network ls
docker network inspect redis


Create and configure the redis nodes

for port in $(seq 1 6);
do
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done 
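Before touching `/mydata`, the loop can be dry-run against a temp directory to confirm the generated config files look right (a sketch: identical heredoc, only the base path differs):

```shell
# Dry run: generate the six redis.conf files under a temp directory
# instead of /mydata, so the loop can be verified without root.
base=$(mktemp -d)
for port in $(seq 1 6); do
  mkdir -p "${base}/redis/node-${port}/conf"
  cat << EOF > "${base}/redis/node-${port}/conf/redis.conf"
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
grep cluster-announce-ip "${base}/redis/node-3/conf/redis.conf"
# prints: cluster-announce-ip 172.38.0.13
```

Because the heredoc delimiter `EOF` is unquoted, `${port}` is expanded at write time, which is exactly how each node gets its own announce IP.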


Pull the Redis image and start the nodes

# Run six Redis containers via a loop
for port in $(seq 1 6); do
docker run -p 637${port}:6379 -p 1667${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done

# Then enter one node (the redis image ships sh, not bash) and create the cluster
docker exec -it redis-1 /bin/sh
redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379  --cluster-replicas 1

// Template
docker run -p 637${port}:6379 -p 1667${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

// Node 1
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
 -v /mydata/redis/node-1/data:/data \
 -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

// Node 2
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
 -v /mydata/redis/node-2/data:/data \
 -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

// Node 3
docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
 -v /mydata/redis/node-3/data:/data \
 -v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

// Node 4
docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
 -v /mydata/redis/node-4/data:/data \
 -v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

// Node 5
docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
 -v /mydata/redis/node-5/data:/data \
 -v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

// Node 6
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
 -v /mydata/redis/node-6/data:/data \
 -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
 -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
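The six near-identical commands are just the template instantiated for ports 1 through 6; a loop can print them for review before executing anything (a dry-run sketch; `gen_redis_cmds` is a hypothetical helper name — drop the `echo` or pipe the output to `sh` to actually run them):

```shell
# gen_redis_cmds: print the six `docker run` commands the template
# expands to, without executing them (no Docker needed for a dry run).
gen_redis_cmds() {
  local port
  for port in $(seq 1 6); do
    echo "docker run -p 637${port}:6379 -p 1667${port}:16379 --name redis-${port}" \
         "-v /mydata/redis/node-${port}/data:/data" \
         "-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf" \
         "-d --net redis --ip 172.38.0.1${port}" \
         "redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf"
  done
}
gen_redis_cmds
```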


Enter a Redis node interactively

docker exec -it redis-1 /bin/sh

Create the Redis cluster

redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 \
172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 \
172.38.0.16:6379 --cluster-replicas 1

On success you will see the following output; the (master/replica) cluster has been created

/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 \
> 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 \
> 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: af21836f327964924ddfd99eadd0f86c4be2b998 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: c040bae27420e3fe503d79cad47816123462262a 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 24bf6f3296119d93f82232555784bbdb1a72697e 172.38.0.14:6379
   replicates c040bae27420e3fe503d79cad47816123462262a
S: 20a534198c68dd48b5a533b0ee4b1d447cc6b1f6 172.38.0.15:6379
   replicates af21836f327964924ddfd99eadd0f86c4be2b998
S: 65d83f48f4884e26dc986a60cc03aa33e22dd48a 172.38.0.16:6379
   replicates b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: af21836f327964924ddfd99eadd0f86c4be2b998 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 20a534198c68dd48b5a533b0ee4b1d447cc6b1f6 172.38.0.15:6379
   slots: (0 slots) slave
   replicates af21836f327964924ddfd99eadd0f86c4be2b998
S: 65d83f48f4884e26dc986a60cc03aa33e22dd48a 172.38.0.16:6379
   slots: (0 slots) slave
   replicates b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62
S: 24bf6f3296119d93f82232555784bbdb1a72697e 172.38.0.14:6379
   slots: (0 slots) slave
   replicates c040bae27420e3fe503d79cad47816123462262a
M: b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: c040bae27420e3fe503d79cad47816123462262a 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

After the cluster is created, you can inspect its configuration and run a quick test

/data # redis-cli -c   // -c enables cluster mode (follows redirections)

127.0.0.1:6379> cluster info  // cluster configuration and state
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:613
cluster_stats_messages_pong_sent:614
cluster_stats_messages_sent:1227
cluster_stats_messages_ping_received:609
cluster_stats_messages_pong_received:613
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1227

127.0.0.1:6379> cluster nodes // cluster node information
887c5ded66d075b6d29602f89a6adc7d1471d22c 172.38.0.11:6379@16379 myself,master - 0 1598439359000 1 connected 0-5460
e6b5521d86abc96fe2e51e40be8fbb1f23da9fe7 172.38.0.15:6379@16379 slave 887c5ded66d075b6d29602f89a6adc7d1471d22c 0 1598439359580 5 connected
d75a9db032f13d9484909b2d0d4724f44e3f1c23 172.38.0.14:6379@16379 slave db3caa7ba307a27a8ef30bf0b26ba91bfb89e932 0 1598439358578 4 connected
b6add5e06fd958045f90f29bcbbf219753798ef6 172.38.0.16:6379@16379 slave 7684dfd02929085817de59f334d241e6cbcd1e99 0 1598439358578 6 connected
7684dfd02929085817de59f334d241e6cbcd1e99 172.38.0.12:6379@16379 master - 0 1598439360082 2 connected 5461-10922
db3caa7ba307a27a8ef30bf0b26ba91bfb89e932 172.38.0.13:6379@16379 master - 0 1598439359079 3 connected 10923-16383
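Each `cluster nodes` line is space-separated: node id, address@cluster-bus-port, flags, master id, ping/pong timestamps, config epoch, link state, and (for masters) slot ranges. A small awk sketch, fed two sample lines copied from the output above, extracts just the address and role:

```shell
# Two sample lines taken from the `cluster nodes` output above
nodes='887c5ded66d075b6d29602f89a6adc7d1471d22c 172.38.0.11:6379@16379 myself,master - 0 1598439359000 1 connected 0-5460
e6b5521d86abc96fe2e51e40be8fbb1f23da9fe7 172.38.0.15:6379@16379 slave 887c5ded66d075b6d29602f89a6adc7d1471d22c 0 1598439359580 5 connected'

# Field 2 is address@bus-port, field 3 the flags; strip the "myself," marker
echo "$nodes" | awk '{ sub(/^myself,/, "", $3); split($2, a, "@"); print a[1], $3 }'
# prints:
# 172.38.0.11:6379 master
# 172.38.0.15:6379 slave
```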

A quick test of Redis Cluster high availability

127.0.0.1:6379> set msg "I Love YHH"	// set a value
-> Redirected to slot [6257] located at 172.38.0.12:6379
OK
172.38.0.12:6379> get msg	// get the value
"I Love YHH"
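The `Redirected to slot [6257]` line is Redis Cluster's key routing: per the Redis Cluster specification, a key maps to slot `CRC16(key) mod 16384` (CRC-16/XMODEM, whose documented reference value is `crc16("123456789") = 0x31C3`), and the client is redirected to whichever master owns that slot. A bash sketch of the slot function (hash tags `{...}` are not handled; `keyslot` is a hypothetical helper name):

```shell
# keyslot KEY -> CRC16-XMODEM(KEY) mod 16384
# (poly 0x1021, init 0, MSB-first, no reflection, xorout 0)
keyslot() {
  local key=$1 crc=0 i c bit
  for (( i = 0; i < ${#key}; i++ )); do
    printf -v c '%d' "'${key:i:1}"            # byte value of character i
    crc=$(( (crc ^ (c << 8)) & 0xFFFF ))
    for bit in 1 2 3 4 5 6 7 8; do
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo $(( crc % 16384 ))
}

keyslot 123456789   # 0x31C3 = 12739, the check value from the Redis spec
keyslot msg         # matches the slot 6257 seen in the redirect above
```

Slot 6257 falls in the 5461-10922 range, which is why the 172.38.0.12 master answered.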

// Open a new window
// Use docker stop to simulate a crash of the master holding the msg value
[root@localhost /]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          PORTS                                              NAMES
430f29b7fdd2   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes   0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
343b4d098652   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes   0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
f2ce7d2db9ba   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes   0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
7411b8f6c7d5   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes   0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp   redis-3
b7a12d144758   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   12 minutes ago   Up 12 minutes   0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
d7ce020b171e   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   13 minutes ago   Up 13 minutes   0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1

[root@localhost /]# docker stop redis-2
redis-2

// Re-enter the cluster CLI and try to fetch msg
172.38.0.12:6379> get msg
Could not connect to Redis at 172.38.0.12:6379: Host is unreachable
(7.10s)
not connected> exit
/data # exit
[root@localhost conf]# docker exec -it redis-1 /bin/sh
/data # redis-cli -c
127.0.0.1:6379> get msg	// msg is now served by the promoted replica
-> Redirected to slot [6257] located at 172.38.0.16:6379
"I Love YHH"

Node information

172.38.0.16:6379> cluster nodes
c040bae27420e3fe503d79cad47816123462262a 172.38.0.13:6379@16379 master - 0 1618385704146 3 connected 10923-16383
b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62 172.38.0.12:6379@16379 master,fail - 1618385356051 1618385354000 2 connected
20a534198c68dd48b5a533b0ee4b1d447cc6b1f6 172.38.0.15:6379@16379 slave af21836f327964924ddfd99eadd0f86c4be2b998 0 1618385703038 5 connected
24bf6f3296119d93f82232555784bbdb1a72697e 172.38.0.14:6379@16379 slave c040bae27420e3fe503d79cad47816123462262a 0 1618385703139 4 connected
65d83f48f4884e26dc986a60cc03aa33e22dd48a 172.38.0.16:6379@16379 myself,master - 0 1618385702000 8 connected 5461-10922
af21836f327964924ddfd99eadd0f86c4be2b998 172.38.0.11:6379@16379 master - 0 1618385702000 1 connected 0-5460

The Docker-based Redis cluster is complete!
Once we adopt Docker, every technology gradually becomes simpler!

Actual run

[root@localhost tomcat]# docker network create redis --subnet 172.38.0.0/16
ab1607a9953d571bc29eb87c58a7a422b742622b79349421fe3b1d5d84d2c309
[root@localhost tomcat]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
4b16ee2d6925   bridge    bridge    local
eedd07789e82   host      host      local
9b2a5d2f9fe7   mynet     bridge    local
48629cdb554a   none      null      local
ab1607a9953d   redis     bridge    local
[root@localhost tomcat]# for port in $(seq 1 6);\
> do \
> mkdir -p /mydata/redis/node-${port}/conf
> touch /mydata/redis/node-${port}/conf/redis.conf
> cat << EOF >> /mydata/redis/node-${port}/conf/redis.conf
> port 6379
> bind 0.0.0.0
> cluster-enabled yes
> cluster-config-file nodes.conf
> cluster-node-timeout 5000
> cluster-announce-ip 172.38.0.1${port}
> cluster-announce-port 6379
> cluster-announce-bus-port 16379
> appendonly yes
> EOF
> done
[root@localhost tomcat]# cd /mydata/
[root@localhost mydata]# ls
redis
[root@localhost mydata]# cd redis/
[root@localhost redis]# ls
node-1  node-2  node-3  node-4  node-5  node-6
[root@localhost redis]# cd node-1
[root@localhost node-1]# ls
conf
[root@localhost node-1]# cd conf/
[root@localhost conf]# ls
redis.conf
[root@localhost conf]# cat redis.conf 
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.11
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes

[root@localhost conf]# docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
>  -v /mydata/redis/node-1/data:/data \
>  -v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
>  -d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
Unable to find image 'redis:5.0.9-alpine3.11' locally
5.0.9-alpine3.11: Pulling from library/redis
cbdbe7a5bc2a: Pull complete 
dc0373118a0d: Pull complete 
cfd369fe6256: Pull complete 
3e45770272d9: Pull complete 
558de8ea3153: Pull complete 
a2c652551612: Pull complete 
Digest: sha256:83a3af36d5e57f2901b4783c313720e5fa3ecf0424ba86ad9775e06a9a5e35d0
Status: Downloaded newer image for redis:5.0.9-alpine3.11
d7ce020b171e560c48948a714871f88a39ea8fb2d81aace566990a3174d9dee8
[root@localhost conf]# docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
>  -v /mydata/redis/node-2/data:/data \
>  -v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
>  -d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
b7a12d144758968a6124d5ac42bac61264f428a72588e942d4827737d71a64bd
[root@localhost conf]# docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
>  -v /mydata/redis/node-3/data:/data \
>  -v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
>  -d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
7411b8f6c7d5ab059941fb2a0a3f82ed5b02fb8f3554082f112e411019424dce
[root@localhost conf]# docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
>  -v /mydata/redis/node-4/data:/data \
>  -v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
>  -d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
f2ce7d2db9ba756121715672fef55cfd4dfd92574cb1b1798dc91db0ae815ee5
[root@localhost conf]# docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
>  -v /mydata/redis/node-5/data:/data \
>  -v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
>  -d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
343b4d0986529851a89d5afb78bc523deb0e7b56d7709e30c4440746cd749211
[root@localhost conf]# docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
>  -v /mydata/redis/node-6/data:/data \
>  -v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
>  -d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
430f29b7fdd2ea2e11d08f8c5bbdf9f708b69f7ddcda86f6201acc63e7776746
[root@localhost conf]# docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS              PORTS                                              NAMES
430f29b7fdd2   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
343b4d098652   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
f2ce7d2db9ba   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
7411b8f6c7d5   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp   redis-3
b7a12d144758   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
d7ce020b171e   redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   2 minutes ago        Up About a minute   0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1
[root@localhost conf]# docker exec -it redis-1 /bin/sh
/data # ls
appendonly.aof  nodes.conf
/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 \
> 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 \
> 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: af21836f327964924ddfd99eadd0f86c4be2b998 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: c040bae27420e3fe503d79cad47816123462262a 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 24bf6f3296119d93f82232555784bbdb1a72697e 172.38.0.14:6379
   replicates c040bae27420e3fe503d79cad47816123462262a
S: 20a534198c68dd48b5a533b0ee4b1d447cc6b1f6 172.38.0.15:6379
   replicates af21836f327964924ddfd99eadd0f86c4be2b998
S: 65d83f48f4884e26dc986a60cc03aa33e22dd48a 172.38.0.16:6379
   replicates b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: af21836f327964924ddfd99eadd0f86c4be2b998 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 20a534198c68dd48b5a533b0ee4b1d447cc6b1f6 172.38.0.15:6379
   slots: (0 slots) slave
   replicates af21836f327964924ddfd99eadd0f86c4be2b998
S: 65d83f48f4884e26dc986a60cc03aa33e22dd48a 172.38.0.16:6379
   slots: (0 slots) slave
   replicates b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62
S: 24bf6f3296119d93f82232555784bbdb1a72697e 172.38.0.14:6379
   slots: (0 slots) slave
   replicates c040bae27420e3fe503d79cad47816123462262a
M: b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: c040bae27420e3fe503d79cad47816123462262a 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:382
cluster_stats_messages_pong_sent:382
cluster_stats_messages_sent:764
cluster_stats_messages_ping_received:377
cluster_stats_messages_pong_received:382
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:764
127.0.0.1:6379> cluster nodes
20a534198c68dd48b5a533b0ee4b1d447cc6b1f6 172.38.0.15:6379@16379 slave af21836f327964924ddfd99eadd0f86c4be2b998 0 1618384814088 5 connected
65d83f48f4884e26dc986a60cc03aa33e22dd48a 172.38.0.16:6379@16379 slave b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62 0 1618384814589 6 connected
af21836f327964924ddfd99eadd0f86c4be2b998 172.38.0.11:6379@16379 myself,master - 0 1618384814000 1 connected 0-5460
24bf6f3296119d93f82232555784bbdb1a72697e 172.38.0.14:6379@16379 slave c040bae27420e3fe503d79cad47816123462262a 0 1618384814000 4 connected
b0cf655ea86ff16e2c6f9a82f3d9f1a3d7dc7c62 172.38.0.12:6379@16379 master - 0 1618384815096 2 connected 5461-10922
c040bae27420e3fe503d79cad47816123462262a 172.38.0.13:6379@16379 master - 0 1618384814589 3 connected 10923-16383

# Continues the earlier section: a quick test of Redis Cluster high availability

Packaging a Spring Boot Microservice as a Docker Image

1. Create a Spring Boot project

(screenshots)

Project configuration

(screenshots)

Write the code

(screenshot)

package com.example.demo.controller;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @RequestMapping("/hello")
    public String hello(){
        return "hello,huangjialin";
    }
}

Start it up and test

http://127.0.0.1:8080/hello
(screenshot)

2. Package and run

(screenshots)

Local test

(screenshot)
Locate the packaged jar file
(screenshots)
Started successfully
(screenshot)

Install the plugin

(screenshot)
You can also connect a remote repository here
(screenshots)

3. Write the Dockerfile

# Base image with JDK 8
FROM java:8
# Copy the Maven-built jar into the image as /app.jar
COPY *.jar /app.jar
# Default arguments, appended after ENTRYPOINT; overridable at docker run
CMD ["--server.port=8080"]
EXPOSE 8080
# app.jar resolves against the default working directory /
ENTRYPOINT ["java","-jar","app.jar"]

With an exec-form ENTRYPOINT plus CMD, the container starts with `java -jar app.jar --server.port=8080`; any arguments passed to `docker run <image>` replace only the CMD part.

(screenshot)

Operations on Linux

Stop and remove all containers

172.38.0.16:6379> exit
/data # exit
[root@localhost conf]# docker rm -f $(docker ps -qa)
430f29b7fdd2
343b4d098652
f2ce7d2db9ba
7411b8f6c7d5
b7a12d144758
d7ce020b171e
[root@localhost conf]# 

Create a new directory: idea

[root@localhost conf]# cd /home
[root@localhost home]# ls
ceshi  dockerfile  docker-test-volume  huang.java  huangjialin  huang.txt  mylinux  mysql  test.java  testTmp
[root@localhost home]# mkdir idea
[root@localhost home]# ls
ceshi  dockerfile  docker-test-volume  huang.java  huangjialin  huang.txt  idea  mylinux  mysql  test.java  testTmp
[root@localhost home]# cd idea/
[root@localhost idea]# ls

File transfer

(screenshot)
Upload succeeded

[root@localhost idea]# ls
demo-0.0.1-SNAPSHOT.jar  Dockerfile
[root@localhost idea]# ll
total 16672
-rw-r--r--. 1 root root 17064575 Apr 14 01:54 demo-0.0.1-SNAPSHOT.jar
-rw-r--r--. 1 root root      111 Apr 14 01:54 Dockerfile

4. Build the image

[root@localhost idea]# docker build -t huangjialin666 .
Sending build context to Docker daemon  17.07MB
Step 1/5 : FROM java:8
8: Pulling from library/java
5040bd298390: Pull complete 
fce5728aad85: Pull complete 
76610ec20bf5: Pull complete 
60170fec2151: Pull complete 
e98f73de8f0d: Pull complete 
11f7af24ed9c: Pull complete 
49e2d6393f32: Pull complete 
bb9cdec9c7f3: Pull complete 
Digest: sha256:c1ff613e8ba25833d2e1940da0940c3824f03f802c449f3d1815a66b7f8c0e9d
Status: Downloaded newer image for java:8
 ---> d23bdf5b1b1b
Step 2/5 : COPY *.jar /app.jar
 ---> ca5673d370bc
Step 3/5 : CMD ["--server.port=8080"]
 ---> Running in a62e4ab48902
Removing intermediate container a62e4ab48902
 ---> f341490d9232
Step 4/5 : EXPOSE 8080
 ---> Running in 1fbd25817932
Removing intermediate container 1fbd25817932
 ---> 1aba2bcfb483
Step 5/5 : ENTRYPOINT ["java","-jar","app.jar"]
 ---> Running in 980468eb1893
Removing intermediate container 980468eb1893
 ---> 5285ac7f6cbc
Successfully built 5285ac7f6cbc
Successfully tagged huangjialin666:latest

[root@localhost idea]# docker images
REPOSITORY             TAG                IMAGE ID       CREATED          SIZE
huangjialin666         latest             5285ac7f6cbc   52 seconds ago   660MB

5. Run the container

[root@localhost idea]# docker run -d -P --name huangjialin-springboot-web huangjialin666
7621b33ce6b8d30c7c44362897f11cc4d9cbffc493852ed2adc8190848c56008
[root@localhost idea]# docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED          STATUS          PORTS                     NAMES
7621b33ce6b8   huangjialin666   "java -jar app.jar -…"   57 seconds ago   Up 49 seconds   0.0.0.0:49157->8080/tcp   huangjialin-springboot-web
[root@localhost idea]# curl 127.0.0.1:49157
{"timestamp":"2021-04-14T09:09:35.052+00:00","status":404,"error":"Not Found","message":"","path":"/"}[root@localhost idea]# 
[root@localhost idea]# curl 127.0.0.1:49157/hello
hello,huangjialin[root@localhost idea]# 

Test succeeded

(screenshot)

From now on, with Docker, delivering to someone else is just a matter of handing over an image!

狂神's channel (Bilibili)

https://space.bilibili.com/95256449/video?keyword=Docker

Tags: tomcat,精髓,redis,6379,localhost,Docker,root,docker,神学
Source: https://blog.csdn.net/qq_40649503/article/details/115639569