
RHCA cl210 014: cloud-init script injection, delta disks on the filesystem, injecting an image into a volume, manila file sharing


cloud-init

Pull the image

wget http://materials.example.com/osp-small.qcow2 -O ~/cloud-init

Log in to OpenStack (source the credentials)

[student@workstation ~]$ source developer1-finance-rc 

Upload the image

[student@workstation ~(developer1-finance)]$ openstack image create --disk-format qcow2  --min-disk 10 --min-ram 2048 --file cloud-init cloud-init

Write the user-data script

[student@workstation ~(developer1-finance)]$ cat user-data 
#!/bin/bash
yum -y install httpd
systemctl enable httpd
systemctl start httpd
echo hello mastermao > /var/www/html/index.html

Launch an instance

[student@workstation ~(developer1-finance)]$ openstack server create --flavor default --key-name example-keypair --nic net-id=finance-network1 --security-group default --image cloud-init --user-data ~/user-data cloud-init 

Create a floating IP

[student@workstation ~(developer1-finance)]$ openstack floating ip create provider-datacentre

Attach the floating IP

[student@workstation ~(developer1-finance)]$ openstack server add floating ip cloud-init 172.25.250.109

[student@workstation ~(developer1-finance)]$ curl 172.25.250.109
hello mastermao

Clean up the environment:

lab customization-img-cloudinit cleanup

Storage

In OSP 13 all data lives on Ceph.
Ceph: the MON is the cluster entry point. It holds the cluster maps: where a block lives, where the OSDs are; all of it can be queried.
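The maps the MON serves can be queried directly; a minimal sketch, run as admin on controller0 (the object name obj1 is hypothetical):

[root@controller0 ~]# ceph mon stat            # monitor quorum and addresses
[root@controller0 ~]# ceph osd tree            # the OSD map: hosts and OSD placement
[root@controller0 ~]# ceph osd map images obj1 # which PG/OSDs would hold a given object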

[root@controller0 ~]# docker ps | grep ceph | grep mon
2f9557226ca4        172.25.249.200:8787/rhceph/rhceph-3-rhel7:latest                       "/entrypoint.sh"         37 minutes ago      Up 37 minutes                                       ceph-mon-controller0

MGR: monitoring

[root@controller0 ~]# docker ps | grep ceph | grep mgr
5d24962174b8        172.25.249.200:8787/rhceph/rhceph-3-rhel7:latest                       "/entrypoint.sh"         37 minutes ago      Up 37 minutes                                        ceph-mgr-controller0

MDS: metadata service

[root@controller0 ~]# docker ps | grep ceph | grep mds
3a6bc8348b81        172.25.249.200:8787/rhceph/rhceph-3-rhel7:latest                       "/entrypoint.sh"         43 minutes ago      Up 43 minutes                                        ceph-mds-controller0

All of these run on the controller node; OpenStack and Ceph are converged here.

Dedicated Ceph nodes and hyperconverged nodes run only OSDs.

[root@controller0 ~]# ceph -s
cluster:
    id:     fe8e3db0-d6c3-11e8-a76d-52540001fac8
    health: HEALTH_OK

services:
    mon: 1 daemons, quorum controller0
    mgr: controller0(active)
    mds: cephfs-1/1/1 up  {0=controller0=up:active}
    osd: 6 osds: 6 up, 6 in

data:
    pools:   7 pools, 416 pgs
    objects: 2118 objects, 12585 MB
    usage:   13210 MB used, 101 GB / 113 GB avail
    pgs:     416 active+clean

MON, MGR, and MDS all run on the controller node.

OpenStack authenticates to the Ceph cluster with cephx (Ceph's native authentication).
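A sketch of cephx in action: authenticate as a specific identity instead of the implicit client.admin (assuming the openstack user's mon caps permit a health read):

[root@controller0 ~]# ceph --id openstack --keyring /etc/ceph/ceph.client.openstack.keyring health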


(From the Ceph architecture diagram:)
RADOS, the object store, sits at the bottom.
librados exposes its low-level, unfriendly interface.
RADOSGW provides the friendly Swift and S3 interfaces.
RBD provides block storage (OpenStack images and disks).
CephFS is the file system.
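The layers are visible from the CLI as well; a sketch contrasting the raw object view and the friendly block view of the same pool:

[root@controller0 ~]# rados -p images ls | head -3   # raw librados object listing (the unfriendly layer)
[root@controller0 ~]# rbd -p images ls               # the same pool through the RBD interface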

Set up the lab for the storage exercise with one friendly command:

[student@workstation ~(developer1-finance)]$ lab storage-backend setup 

If the setup fails with "rpc error: code = 14 desc = grpc: the connection is unavailable", restart docker.

ceph osd pool ls: apart from the manila pools, every pool here is block storage.

images: glance images
metrics: ceilometer metering and monitoring data
backups: instance backup snapshots
vms: ephemeral system disks
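Per-pool usage shows which service is actually consuming space; a sketch:

[root@controller0 ~]# ceph df    # global usage plus per-pool (images, vms, volumes, ...) objects and size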

[root@controller0 ~]# rbd -p vms ls
1bec428a-e0b4-4769-b2cd-d978ae011fb4_disk
6791a93b-cf6d-45eb-be1f-0eaf4fee334c_disk
[root@controller0 ~]# 

[student@workstation ~(developer1-finance)]$ nova list
+--------------------------------------+------------+--------+------------+-------------+----------------------------------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks                                     |
+--------------------------------------+------------+--------+------------+-------------+----------------------------------------------+
| 6791a93b-cf6d-45eb-be1f-0eaf4fee334c | cloud-init | ACTIVE | -          | Running     | finance-network1=192.168.1.6, 172.25.250.109 |
+--------------------------------------+------------+--------+------------+-------------+----------------------------------------------+

The ID beginning with 679 matches the instance above.
Delete the instance and its ephemeral disk disappears with it:

[student@workstation ~(developer1-finance)]$ nova delete 6791a93b-cf6d-45eb-be1f-0eaf4fee334c
Request to delete server 6791a93b-cf6d-45eb-be1f-0eaf4fee334c has been accepted.
[root@controller0 ~]# rbd -p vms ls
1bec428a-e0b4-4769-b2cd-d978ae011fb4_disk

As a Ceph client, the controller must have the ceph commands installed, plus the keyrings in /etc/ceph.
The MON address also lives on controller0.

[root@controller0 ~]# cat /etc/ceph/ceph.conf 
# Please do not change this file directly since it is managed by Ansible and will be overwritten
[global]
cluster network = 172.24.4.0/24
fsid = fe8e3db0-d6c3-11e8-a76d-52540001fac8
log file = /dev/null
mon host = 172.24.3.1
This config file sets the MON address to the controller (172.24.3.1).


[student@workstation ~(developer1-finance)]$ openstack image list
+--------------------------------------+-----------------+--------+
| ID                                   | Name            | Status |
+--------------------------------------+-----------------+--------+
| dc368824-b611-495a-b320-a3d33acef11c | cloud-init      | active |
| ec9473de-4048-4ebb-b08a-a9be619477ac | octavia-amphora | active |
| 6b0128a9-4481-4ceb-b34e-ffe92e0dcfdd | rhel7           | active |
| 5f7f8208-33b5-4f17-8297-588f938182c0 | rhel7-db        | active |
| 14b7e8b2-7c6d-4bcf-b159-1e4e7582107c | rhel7-web       | active |
| 75339809-dc32-4bfc-b6a9-5b8cddfed33f | small           | active |
+--------------------------------------+-----------------+--------+
[student@workstation ~(developer1-finance)]$ 

[root@controller0 ~]# rbd -p images ls
14b7e8b2-7c6d-4bcf-b159-1e4e7582107c
5f7f8208-33b5-4f17-8297-588f938182c0
6b0128a9-4481-4ceb-b34e-ffe92e0dcfdd
75339809-dc32-4bfc-b6a9-5b8cddfed33f
dc368824-b611-495a-b320-a3d33acef11c
ec9473de-4048-4ebb-b08a-a9be619477ac
[root@controller0 ~]# 

They correspond one to one.
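The mapping can be checked mechanically; a sketch using the rhel7 image from the listing above:

[student@workstation ~(developer1-finance)]$ openstack image show rhel7 -f value -c id
6b0128a9-4481-4ceb-b34e-ffe92e0dcfdd
[root@controller0 ~]# rbd info images/6b0128a9-4481-4ceb-b34e-ffe92e0dcfdd   # the matching RBD image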

Glance integration: edit one config file and supply a keyring.

[root@controller0 glance]# pwd
/var/lib/config-data/glance_api/etc/glance
[root@controller0 glance]# cat glance-api.conf (search for rbd)

# Possible values:
#     * A comma separated list that could include:
#         * file
#         * http
#         * swift
#         * rbd
#         * sheepdog
#         * cinder
#         * vmware
#
# Related Options:
#     * default_store
#
#  (list value)
#stores = file,http
stores=http,rbd

The /var/lib/config-data/glance_api/etc/glance directory is bind-mounted into the container; editing files here changes the glance service running inside the container.
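The bind mount can be confirmed from docker, and restarting the container is what actually reloads the config; a sketch (the container name glance_api is an assumption inferred from the directory name):

[root@controller0 ~]# docker inspect -f '{{ .HostConfig.Binds }}' glance_api | tr ',' '\n' | grep glance
[root@controller0 ~]# docker restart glance_api   # pick up the edited glance-api.conf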

[root@controller0 glance]# egrep -v "^#|^$" glance-api.conf  | grep -A5 glance_store 
[glance_store]
stores=http,rbd
default_store=rbd    # default store is rbd
rbd_store_pool=images
rbd_store_user=openstack   # cephx user whose keyring is used
rbd_store_ceph_conf=/etc/ceph/ceph.conf   # ceph config file

Caps for the openstack cephx user:
client.openstack
        key: AQAPHM9bAAAAABAA8DLv19H7QXzX0CnaTql/1w==
        caps: [mds] 
        caps: [mgr] allow * 
        caps: [mon] profile rbd    # allows rbd-related operations
        caps: [osd] profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics



[root@controller0 glance]# ll /etc/ceph/
total 28
-rw-------. 1 root root 159 Oct 23  2018 ceph.client.admin.keyring
-rw-------. 1  167  167 284 Oct 23  2018 ceph.client.manila.keyring
-rw-------. 1  167  167 276 Oct 23  2018 ceph.client.openstack.keyring
-rw-------. 1  167  167 157 Oct 23  2018 ceph.client.radosgw.keyring
-rw-r--r--. 1 root root 797 Oct 23  2018 ceph.conf
-rw-r--r--. 1 root root  66 Oct 23  2018 ceph.mgr.controller0.keyring
-rw-------. 1  167  167 688 Oct 23  2018 ceph.mon.keyring

Every node has these files; to access the cluster you must have a keyring.
This authentication is simpler than the cephx-plus-libvirt setup used for nova/cinder below: having the keyring is enough.
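A sketch that proves the point: the keyring alone is enough to talk to the cluster as client.openstack:

[root@controller0 ~]# rbd -p images ls --id openstack --keyring /etc/ceph/ceph.client.openstack.keyring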

Nova integration with Ceph

The nova-api on the controller has no integration settings; they live in the nova-compute config on the compute nodes.
By default nova-compute stores instance disks on the local filesystem; nova-compute decides where they go.

Check the nova integration on the compute node:

[root@compute0 nova]# egrep -v "^$|^#" /var/lib/config-data/nova_libvirt/etc/nova/nova.conf | grep -A10 libvirt

[libvirt]
live_migration_uri=qemu+ssh://nova_migration@%s:2022/system?keyfile=/etc/nova/migration/identity
rbd_user=openstack
rbd_secret_uuid=fe8e3db0-d6c3-11e8-a76d-52540001fac8
images_type=rbd
images_rbd_pool=vms
images_rbd_ceph_conf=/etc/ceph/ceph.conf
virt_type=kvm
cpu_mode=none
inject_password=False
inject_key=False

Live migration is supported (live_migration_uri).
The integration uses the openstack cephx user.
The libvirt integration additionally requires a secret UUID.
It uses the /etc/ceph/ceph.conf config file.

[root@compute0 nova]# virsh secret-list
UUID                                  Usage
--------------------------------------------------------------------------------
fe8e3db0-d6c3-11e8-a76d-52540001fac8  ceph client.openstack secret
This is the UUID of the libvirt secret that holds the client.openstack key.

libvirt uses this secret UUID to authenticate to Ceph.
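TripleO registers this secret automatically; for reference, a sketch of how such a libvirt secret is typically created by hand (secret.xml is a scratch file):

[root@compute0 ~]# cat > secret.xml << 'EOF'
<secret ephemeral='no' private='no'>
  <uuid>fe8e3db0-d6c3-11e8-a76d-52540001fac8</uuid>
  <usage type='ceph'>
    <name>ceph client.openstack secret</name>
  </usage>
</secret>
EOF
[root@compute0 ~]# virsh secret-define --file secret.xml
[root@compute0 ~]# virsh secret-set-value --secret fe8e3db0-d6c3-11e8-a76d-52540001fac8 \
      --base64 $(ceph auth get-key client.openstack)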

Cinder integration

[root@controller0 cinder]# cat /var/lib/config-data/cinder/etc/cinder/cinder.conf

[tripleo_ceph]
backend_host=hostgroup     # host group
volume_backend_name=tripleo_ceph
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=openstack
rbd_pool=volumes  # the block-storage pool
[root@controller0 cinder]# 

Create an instance

[student@workstation ~(developer1-finance)]$ openstack server create --flavor default --key-name example-keypair --nic net-id=finance-network1 --security-group default --image cloud-init --user-data ~/user-data cloud-init 

Create a volume

[student@workstation ~(developer1-finance)]$ openstack volume create --size 1 disk0

View the RBD image

[root@controller0 cinder]# rbd -p volumes ls
volume-7aff1e77-206e-4023-b869-d1329e3aa797

The volume appears in the pool:

[student@workstation ~(developer1-finance)]$ openstack volume create --size 1 disk0
+---------------------+------------------------------------------------------------------+
| Field               | Value                                                            |
+---------------------+------------------------------------------------------------------+
| attachments         | []                                                               |
| availability_zone   | nova                                                             |
| bootable            | false                                                            |
| consistencygroup_id | None                                                             |
| created_at          | 2020-05-31T19:06:26.000000                                       |
| description         | None                                                             |
| encrypted           | False                                                            |
| id                  | 7aff1e77-206e-4023-b869-d1329e3aa797       

Attach the volume

[student@workstation ~(developer1-finance)]$ openstack server add volume  cloud-init disk0
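A sketch to confirm the attachment:

[student@workstation ~(developer1-finance)]$ openstack volume list   # disk0 should now be in-use, attached to cloud-init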

Lesson 008 has the detailed integration steps.

Managing ephemeral storage

Location of the base image

The delta (copy-on-write) disk would normally sit here on local storage, but with images_type=rbd it lands in Ceph's vms pool instead.

[root@compute1 ~]# vi /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf

[libvirt]
live_migration_uri=qemu+ssh://nova_migration@%s:2022/system?keyfile=/etc/nova/migration/identity
rbd_user=openstack
rbd_secret_uuid=fe8e3db0-d6c3-11e8-a76d-52540001fac8
#images_type=rbd
images_rbd_pool=vms
images_rbd_ceph_conf=/etc/ceph/ceph.conf
virt_type=kvm

docker restart nova_compute

Do this on both compute nodes.

Launch an instance again

[student@workstation ~(developer1-finance)]$ openstack server create --flavor default --key-name example-keypair --nic net-id=finance-network1 --security-group default --image cloud-init server1

Use virsh list --all to see which compute node the instance was created on.

[root@compute0 instances]# cd b94e5b5e-d18e-4b31-af49-7b4b15df6cee/
[root@compute0 b94e5b5e-d18e-4b31-af49-7b4b15df6cee]# ls
console.log  disk  disk.info
[root@compute0 b94e5b5e-d18e-4b31-af49-7b4b15df6cee]# pwd
/var/lib/nova/instances/b94e5b5e-d18e-4b31-af49-7b4b15df6cee
[root@compute0 b94e5b5e-d18e-4b31-af49-7b4b15df6cee]# 

The delta disk is back on the local filesystem.
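The base/delta relationship can be verified with qemu-img; a sketch run in the instance directory (the _base path is where nova caches base images; the hash name is a placeholder):

[root@compute0 b94e5b5e-d18e-4b31-af49-7b4b15df6cee]# qemu-img info disk
# expect: file format: qcow2
# expect: backing file: /var/lib/nova/instances/_base/<image-hash>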

[cloud-user@localhost ~]$ dd if=/dev/zero of=file count=100 bs=1M
Writes inside the instance land in the delta disk:

[root@compute0 b94e5b5e-d18e-4b31-af49-7b4b15df6cee]# ll -h
total 128M
-rw-------. 1 root  root  224K May 31 21:16 console.log
-rw-r--r--. 1 qemu  qemu  127M May 31 21:16 disk
-rw-r--r--. 1 42436 42436   79 May 31 20:36 disk.info

The delta disk starts small and grows gradually.
Delete the instance and the delta disk disappears:

[root@compute0 b94e5b5e-d18e-4b31-af49-7b4b15df6cee]# ll -h
total 0

Remember to change the nova_compute config file back afterwards.

Putting the nova system disk in Cinder

Stored as a local file or in Ceph's vms pool, the ephemeral disk vanishes when the instance is deleted; that is not persistent storage.
Cinder is different: a system disk that lives on a Cinder volume survives instance deletion and is persistent.

nova-compute, via Cinder, stores the system disk in Ceph.

Create a Cinder volume and fill it from the image:

[student@workstation ~(developer1-finance)]$ openstack volume create --size 10 --image cloud-init  disk2

It is stored in Ceph:

[root@controller0 ~]# rbd -p volumes ls
volume-58d41fcd-a4e3-4fe4-8299-27fe5f1a0920

All instance writes go to this volumes RBD image; even accidental deletion of the instance does not lose the data.

[student@workstation ~(developer1-finance)]$ openstack server create --flavor default --key-name example-keypair --nic net-id=finance-network1 --security-group default --volume disk2 server3

Attach a floating IP (omitted).

SSH into the instance as cloud-user and make some changes:

sync flushes data from memory to disk:
[cloud-user@server3 ~]$ sync
[cloud-user@server3 ~]$ 
[cloud-user@server3 ~]$ cat file.txt 
hello mao
[cloud-user@server3 ~]$ sudo -i
[root@server3 ~]# yum -y install httpd
[root@server3 ~]# echo nice-nice >> /var/www/html/index.html
[root@server3 ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@server3 ~]# systemctl start httpd
[root@server3 ~]# init 0   # power off
Connection to 172.25.250.106 closed by remote host.

Delete the instance

[student@workstation ~(developer1-finance)]$ openstack server delete 32f77c8b-9230-4985-9eff-12dd05e1658d

[root@controller0 ~]# rbd -p volumes ls
volume-58d41fcd-a4e3-4fe4-8299-27fe5f1a0920

The RBD image still exists; the system disk was not deleted.

Launch again

[student@workstation ~(developer1-finance)]$ openstack server create --flavor default --key-name example-keypair --nic net-id=finance-network1 --security-group default --volume disk2 server4

Attach the IP

[student@workstation ~(developer1-finance)]$ openstack server add floating ip server4 172.25.250.103 

Log in again

[cloud-user@server4 ~]$ curl localhost
nice-nice

The changes made earlier were preserved on the disk.

Manila shared file system

Cinder hands the web server a block device, which the web server must format before use.
Manila instead hands out a ready-made directory that instances can mount directly (file sharing); a sketch of the contrast follows.
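A sketch of what the Cinder block path needs inside the guest before it is usable (the device name /dev/vdb is an assumption):

sudo mkfs.xfs /dev/vdb    # the attached block must be formatted first
sudo mount /dev/vdb /mnt  # only now does the web server have a filesystem
# a manila share is already a filesystem; clients simply mount it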

manila-api: accepts requests; the entry point for tenant users
manila-scheduler: chooses which storage backend will serve the request
manila-share: talks to the backend storage

Backend storage options: GlusterFS, Ceph, NFS.
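As the admin, the services and their backend can be listed; a sketch (the output shape is indicative):

[student@workstation ~(architect1-finance)]$ manila service-list   # expect manila-scheduler and a manila-share entry with a @cephfs host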

Manila here is already integrated with Ceph; all that is left is to create a share for consumption.

Share access rules determine which user will later be allowed to access the manila share.

Mounting CephFS requires the keyring file, the config file, and the client driver.

[root@controller0 manila]# pwd
/var/lib/config-data/puppet-generated/manila/etc/manila
[root@controller0 manila]# vi manila.conf 

#enabled_share_backends = <None>
enabled_share_backends=cephfs    # the enabled backend

# Specify list of protocols to be allowed for share creation.
# Available values are '('NFS', 'CIFS', 'GLUSTERFS', 'HDFS', 'CEPHFS',
# 'MAPRFS')' (list value)
#enabled_share_protocols = NFS,CIFS
enabled_share_protocols=CEPHFS      # the protocol

The cephfs backend section:

[cephfs]
driver_handles_share_servers=False
share_backend_name=cephfs
share_driver=manila.share.drivers.cephfs.driver.CephFSDriver
cephfs_conf_path=/etc/ceph/ceph.conf
cephfs_auth_id=manila
cephfs_cluster_name=ceph
cephfs_enable_snapshots=False
cephfs_protocol_helper_type=CEPHFS

Create a share type (recall lesson 008, where we also created a type):

manila type-create cephfstype false

(The trailing false/true sets driver_handles_share_servers; false here matches driver_handles_share_servers=False in the backend config above.)

The admin creates the type:

[student@workstation ~(developer1-finance)]$ source architect1-finance-rc 
[student@workstation ~(architect1-finance)]$ manila type-create cephfstype false

Back as the regular user:

[student@workstation ~(developer1-finance)]$ source developer1-finance-rc 

In the exam, use --help liberally.

[student@workstation ~(developer1-finance)]$ manila create --name finance-share1 --share-type cephfstype cephfs 1


[student@workstation ~(developer1-finance)]$ manila list
+--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+
| ID                                   | Name           | Size | Share Proto | Status    | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+
| 6a0ef8d5-aa2f-4c61-b9b0-48226ab32f8d | finance-share1 | 1    | CEPHFS      | available | False     | cephfstype      |      | nova              |
+--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+

When attaching networks at instance creation, order matters: finance-network1 first, then provider-storage.

[student@workstation ~(developer1-finance)]$ openstack server create --flavor default --image rhel7 --key-name example-keypair --nic net-id=finance-network1 --nic net-id=provider-storage --user-data /home/student/manila/user-data.file finance-server1 

A second NIC is generated and configured automatically; the user-data script brings it up to reach the storage network:

[student@workstation ~(developer1-finance)]$ cat manila/user-data.file 
#!/bin/bash
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 << eof
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=dhcp
eof
ifup eth1

One NIC on the business network, one on the storage network; a quick check is sketched below.
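Once the instance boots, a sketch to confirm the second NIC came up (the 172.24.x.x storage range is an assumption based on the MON address 172.24.3.1):

[cloud-user@finance-server1 ~]$ ip addr show eth1   # expect a DHCP address on the provider-storage subnet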

Create a cephx user for cloud-user so it has permission to mount:

[root@controller0 ~]# ceph osd pool ls
images
metrics
backups
vms
volumes
manila_data       # the share data lives here
manila_metadata   # the metadata lives here; lookups are latency-sensitive, so faster media (SSD) helps

client.manila
        key: AQAPHM9bAAAAABAACK7Xpz3Px+k3dvbBdq1OlA==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow r, allow command "auth del", allow command "auth caps", allow command "auth get", allow command "auth get-or-create"
        caps: [osd] allow rw



[root@controller0 ~]# ceph auth get-or-create client.cloud-user  --name=client.manila --keyring=/etc/ceph/ceph.client.manila.keyring  > /root/cloud-user.keyring

We created the cloud-user identity using the manila user; next, grant it access.

Allow cloud-user to access the finance-share1 share via cephx:

[student@workstation ~(developer1-finance)]$ manila access-allow  finance-share1 cephx cloud-user

Check:

[student@workstation ~(developer1-finance)]$ manila access-list finance-share1
+--------------------------------------+-------------+------------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
| id                                   | access_type | access_to  | access_level | state  | access_key                               | created_at                 | updated_at                 |
+--------------------------------------+-------------+------------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
| 5ece8a68-62df-4e18-a53a-0424e9c86683 | cephx       | cloud-user | rw           | active | AQBuWdRe1Pl+ABAAq789K5CU+sVNcIbLu+Yu+Q== | 2020-06-01T01:29:09.000000 | 2020-06-01T01:29:11.000000 |
+--------------------------------------+-------------+------------+--------------+--------+------------------------------------------+----------------------------+----------------------------+
[student@workstation ~(developer1-finance)]$ 


The mount will use the share's export path (listed further below); first copy the keyring and ceph.conf off the controller:

[root@controller0 ~]# scp cloud-user.keyring /etc/ceph/ceph.conf student@workstation:~
student@workstation's password: 
cloud-user.keyring                                             100%   68    25.0KB/s   00:00    
ceph.conf                                                      100%  797   399.5KB/s   00:00    
[root@controller0 ~]# 

The files land on workstation, then get pushed on to the instance:

[student@workstation ~]$ scp ceph.conf cloud-user.keyring cloud-user@172.25.250.108:~
ceph.conf                                                      100%  797    51.3KB/s   00:00    
cloud-user.keyring                                             100%   68     4.7KB/s   00:00    
[student@workstation ~]$ 

The instance needs the driver and client commands installed: ceph-fuse.

sudo -i to become root:
[root@finance-server1 yum.repos.d]# yum -y install wget

Install wget:

[root@finance-server1 yum.repos.d]# wget http://materials/ceph.repo

Download the repo file:

[root@finance-server1 yum.repos.d]# yum -y install ceph-fuse

Install the client command and driver:

[root@finance-server1 yum.repos.d]# mkdir /mnt/ceph
[root@finance-server1 cloud-user]# ceph-fuse /mnt/ceph/ --id cloud-user  --keyring=/home/cloud-user/cloud-user.keyring --conf=/home/cloud-user/ceph.conf --client-mountpoint=/volumes/_nogroup/46a0381b-8782-4665-a8ec-095c512977d0
ceph-fuse[1616]: starting ceph client
2020-05-31 22:02:32.346592 7fcdf2a5d0c0 -1 init, newargv = 0x55669315ed80 newargc=9
ceph-fuse[1616]: starting fuse
[root@finance-server1 cloud-user]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        10G  1.6G  8.5G  16% /
devtmpfs        898M     0  898M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
tmpfs           184M     0  184M   0% /run/user/1001
tmpfs           184M     0  184M   0% /run/user/0
ceph-fuse       1.0G     0  1.0G   0% /mnt/ceph

The share is mounted.

[student@workstation ~(developer1-finance)]$ manila --help | grep location
    share-export-location-list
                        List export locations of a given share.



[student@workstation ~(developer1-finance)]$ manila share-export-location-list finance-share1
+--------------------------------------+------------------------------------------------------------------------+-----------+
| ID                                   | Path                                                                   | Preferred |
+--------------------------------------+------------------------------------------------------------------------+-----------+
| 243cf5ed-3f42-4805-8abe-df9131669c61 | 172.24.3.1:6789:/volumes/_nogroup/46a0381b-8782-4665-a8ec-095c512977d0 | False     |
+--------------------------------------+------------------------------------------------------------------------+-----------+
The export location can be read from the CLI; the web UI also shows it (at a glance).

The manila part can be done through the GUI; the ceph part still has to be typed out.

Exam approach for the manila question:
As the admin, create the cephfs share type.
As a regular user, create a share with that type.
Create the instance: attach the internal network first, then the storage network, and use a user-data script to bring up the second NIC automatically.
Switch to the controller.
Use the ceph command, as the manila identity, to create a cloud-user.
Back on workstation (developer1), grant cloud-user cephx access to the share.
Copy the ceph config file and the newly created user's key from the controller to workstation.
From workstation, copy them to the instance.
On the instance, install wget, then the ceph commands and driver.
Create a mount directory and run the long mount command that --help won't easily reconstruct (specify the user, the keyring, the config file, the export path, and the mount point).
A condensed command recap follows.
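A condensed recap of the flow, using the names from these notes (long options elided with ...; the share UUID is a placeholder):

manila type-create cephfstype false                                      # admin
manila create --name finance-share1 --share-type cephfstype cephfs 1     # regular user
openstack server create ... --nic net-id=finance-network1 --nic net-id=provider-storage ... finance-server1
ceph auth get-or-create client.cloud-user --name=client.manila \
    --keyring=/etc/ceph/ceph.client.manila.keyring > /root/cloud-user.keyring   # on controller0
manila access-allow finance-share1 cephx cloud-user
manila share-export-location-list finance-share1                         # find the export path
ceph-fuse /mnt/ceph --id cloud-user --keyring=cloud-user.keyring \
    --conf=ceph.conf --client-mountpoint=/volumes/_nogroup/<share-uuid>  # inside the guest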

Source: https://www.cnblogs.com/supermao12/p/16368997.html