LINUX.ORG.RU

Ceph: adding a monitor to the cluster?


Please help: why won't the monitor start on node4? There are no errors, neither now nor when it was added to the cluster. The config is identical everywhere and I didn't change anything by hand; everything was done with these commands:
ceph-deploy new master node1 node2 node3 node4
ceph-deploy install --release nautilus node4
ceph-deploy --overwrite-conf mon create node1 node2 node3 node4 master
ceph-deploy admin node4

The config is the same on every node:

└$► vagrant ssh master -c "cat /etc/ceph/ceph.conf"  
[global]  
fsid = 67946241-773d-4ca9-b8bc-d8e6f3f4f00f  
mon_initial_members = master, node1, node2, node3, node4  
mon_host = 172.20.1.30,172.20.1.31,172.20.1.32,172.20.1.33,172.20.1.34  
auth_cluster_required = cephx  
auth_service_required = cephx  
auth_client_required = cephx  

Same on the other nodes:

└$► vagrant ssh node1 -c "cat /etc/ceph/ceph.conf"  
[global]  
fsid = 67946241-773d-4ca9-b8bc-d8e6f3f4f00f  
mon_initial_members = master, node1, node2, node3, node4  
mon_host = 172.20.1.30,172.20.1.31,172.20.1.32,172.20.1.33,172.20.1.34  
auth_cluster_required = cephx  
auth_service_required = cephx  
auth_client_required = cephx  

Connection to 127.0.0.1 closed.

Adding the monitor to the cluster:

root@master:~# ceph-deploy mon add node4  
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/vagrant/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon add node4
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : add
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fedef731910>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['node4']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7fedefba5ad0>
[ceph_deploy.cli][INFO  ]  address                       : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][INFO  ] ensuring configuration of new mon host: node4
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node4
[node4][DEBUG ] connected to host: node4 
[node4][DEBUG ] detect platform information from remote host
[node4][DEBUG ] detect machine type
[node4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.mon][DEBUG ] Adding mon to cluster ceph, host node4
[ceph_deploy.mon][DEBUG ] using mon address by resolving host: 172.20.1.34
[ceph_deploy.mon][DEBUG ] detecting platform for host node4 ...
[node4][DEBUG ] connected to host: node4 
[node4][DEBUG ] detect platform information from remote host
[node4][DEBUG ] detect machine type
[node4][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 18.04 bionic
[node4][DEBUG ] determining if provided host has same hostname in remote
[node4][DEBUG ] get remote short hostname
[node4][DEBUG ] adding mon to node4
[node4][DEBUG ] get remote short hostname
[node4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node4][DEBUG ] create the mon path if it does not exist
[node4][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node4/done
[node4][DEBUG ] create a done file to avoid re-doing the mon deployment
[node4][DEBUG ] create the init path if it does not exist
[node4][INFO  ] Running command: systemctl enable ceph.target
[node4][INFO  ] Running command: systemctl enable ceph-mon@node4
[node4][INFO  ] Running command: systemctl start ceph-mon@node4
[node4][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node4.asok mon_status
[node4][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node4.asok mon_status
[node4][DEBUG ] ********************************************************************************
[node4][DEBUG ] status for monitor: mon.node4
[node4][DEBUG ] {
[node4][DEBUG ]   "election_epoch": 7, 
[node4][DEBUG ]   "extra_probe_peers": [], 
[node4][DEBUG ]   "feature_map": {
[node4][DEBUG ]     "mon": [
[node4][DEBUG ]       {
[node4][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[node4][DEBUG ]         "num": 1, 
[node4][DEBUG ]         "release": "luminous"
[node4][DEBUG ]       }
[node4][DEBUG ]     ]
[node4][DEBUG ]   }, 
[node4][DEBUG ]   "features": {
[node4][DEBUG ]     "quorum_con": "4611087854031667199", 
[node4][DEBUG ]     "quorum_mon": [
[node4][DEBUG ]       "kraken", 
[node4][DEBUG ]       "luminous", 
[node4][DEBUG ]       "mimic", 
[node4][DEBUG ]       "osdmap-prune", 
[node4][DEBUG ]       "nautilus"
[node4][DEBUG ]     ], 
[node4][DEBUG ]     "required_con": "2449958747315912708", 
[node4][DEBUG ]     "required_mon": [
[node4][DEBUG ]       "kraken", 
[node4][DEBUG ]       "luminous", 
[node4][DEBUG ]       "mimic", 
[node4][DEBUG ]       "osdmap-prune", 
[node4][DEBUG ]       "nautilus"
[node4][DEBUG ]     ]
[node4][DEBUG ]   }, 
[node4][DEBUG ]   "monmap": {
[node4][DEBUG ]     "created": "2020-08-26 23:31:47.620249", 
[node4][DEBUG ]     "epoch": 1, 
[node4][DEBUG ]     "features": {
[node4][DEBUG ]       "optional": [], 
[node4][DEBUG ]       "persistent": [
[node4][DEBUG ]         "kraken", 
[node4][DEBUG ]         "luminous", 
[node4][DEBUG ]         "mimic", 
[node4][DEBUG ]         "osdmap-prune", 
[node4][DEBUG ]         "nautilus"
[node4][DEBUG ]       ]
[node4][DEBUG ]     }, 
[node4][DEBUG ]     "fsid": "63a2d020-df5b-49cc-b06d-bf84902dd771", 
[node4][DEBUG ]     "min_mon_release": 14, 
[node4][DEBUG ]     "min_mon_release_name": "nautilus", 
[node4][DEBUG ]     "modified": "2020-08-26 23:31:47.620249", 
[node4][DEBUG ]     "mons": [
[node4][DEBUG ]       {
[node4][DEBUG ]         "addr": "172.20.1.34:6789/0", 
[node4][DEBUG ]         "name": "node4", 
[node4][DEBUG ]         "public_addr": "172.20.1.34:6789/0", 
[node4][DEBUG ]         "public_addrs": {
[node4][DEBUG ]           "addrvec": [
[node4][DEBUG ]             {
[node4][DEBUG ]               "addr": "172.20.1.34:3300", 
[node4][DEBUG ]               "nonce": 0, 
[node4][DEBUG ]               "type": "v2"
[node4][DEBUG ]             }, 
[node4][DEBUG ]             {
[node4][DEBUG ]               "addr": "172.20.1.34:6789", 
[node4][DEBUG ]               "nonce": 0, 
[node4][DEBUG ]               "type": "v1"
[node4][DEBUG ]             }
[node4][DEBUG ]           ]
[node4][DEBUG ]         }, 
[node4][DEBUG ]         "rank": 0
[node4][DEBUG ]       }
[node4][DEBUG ]     ]
[node4][DEBUG ]   }, 
[node4][DEBUG ]   "name": "node4", 
[node4][DEBUG ]   "outside_quorum": [], 
[node4][DEBUG ]   "quorum": [
[node4][DEBUG ]     0
[node4][DEBUG ]   ], 
[node4][DEBUG ]   "quorum_age": 801, 
[node4][DEBUG ]   "rank": 0, 
[node4][DEBUG ]   "state": "leader", 
[node4][DEBUG ]   "sync_provider": []
[node4][DEBUG ] }
[node4][DEBUG ] ********************************************************************************
[node4][INFO  ] monitor: mon.node4 is running

The command above reports `monitor: mon.node4 is running`, but the monitor is not in the cluster:

root@master:~# ceph mon stat
e1: 4 mons at {master=[v2:172.20.1.30:3300/0,v1:172.20.1.30:6789/0],node1=[v2:172.20.1.31:3300/0,v1:172.20.1.31:6789/0],node2=[v2:172.20.1.32:3300/0,v1:172.20.1.32:6789/0],node3=[v2:172.20.1.33:3300/0,v1:172.20.1.33:6789/0]}, election epoch 30, leader 0 master, quorum 0,1,2,3 master,node1,node2,node3
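The gap is easy to see programmatically: `mon_status` on node4 says `state: leader`, yet `ceph mon stat` lists only four monitors in quorum. A minimal sketch that parses the quorum tail of the `ceph mon stat` line quoted above (the address map is abridged to `{...}`):

```python
import re

# `ceph mon stat` output from this thread (address map abridged)
mon_stat = ("e1: 4 mons at {...}, election epoch 30, leader 0 master, "
            "quorum 0,1,2,3 master,node1,node2,node3")

expected = {"master", "node1", "node2", "node3", "node4"}

# the quorum member names are the comma-separated list at the end of the line
in_quorum = set(re.search(r"quorum [\d,]+ ([\w,]+)$", mon_stat).group(1).split(","))

missing = expected - in_quorum
print("monitors missing from quorum:", missing)
```

With the output from the thread this reports node4 as the only monitor absent from quorum, even though its own daemon is up.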

ceph-deploy has long been deprecated; throw it out. Add the monitor with cephadm or manually per the documentation, it's not hard.

Nastishka ★★★★★ ()
In reply to: comment by Nastishka

ceph-deploy is fine; it just generates a minimal ceph.conf, and once you tweak it by hand everything starts working.
`ceph-deploy new` should really not be run against an existing cluster at all: it creates a new cluster.
In my case node4 ended up with a new cluster id, "fsid": "63a2d020-df5b-49cc-b06d-bf84902dd771".
I added to ceph.conf:

[mon.node1]
    host                       = node1  
    mon addr                   = 172.20.1.31:6789

[mon.node2]
    host                       = node2
    mon addr                   = 172.20.1.32:6789

[mon.node3]
    host                       = node3
    mon addr                   = 172.20.1.33:6789

[mon.node4]
    host                       = node4
    mon addr                   = 172.20.1.34:6789
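The fsid mismatch described above is the root cause, and it is visible in the logs in this very thread: every node's ceph.conf declares one fsid, while node4's `mon_status` monmap carries another. A minimal sketch comparing the two values, using an abridged copy of the `mon_status` JSON printed by ceph-deploy:

```python
import json

# abridged mon_status for mon.node4, as printed by ceph-deploy above
mon_status = json.loads("""
{
  "name": "node4",
  "rank": 0,
  "state": "leader",
  "monmap": {
    "epoch": 1,
    "fsid": "63a2d020-df5b-49cc-b06d-bf84902dd771",
    "mons": [{"name": "node4", "addr": "172.20.1.34:6789/0", "rank": 0}]
  }
}
""")

# fsid every node's /etc/ceph/ceph.conf declares for the cluster
conf_fsid = "67946241-773d-4ca9-b8bc-d8e6f3f4f00f"

monmap_fsid = mon_status["monmap"]["fsid"]
# a mismatch means the mon was bootstrapped into a brand-new one-node
# cluster (what re-running `ceph-deploy new` causes), so it elects itself
# leader instead of joining the quorum of the existing cluster
mismatch = monmap_fsid != conf_fsid
print("fsid mismatch:", mismatch)
```

A monitor whose monmap fsid differs from the cluster's will never join that cluster's quorum, which matches the `ceph mon stat` output earlier in the thread.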
sap78 ()
Last edited by: sap78 (1 edit total)