5-Initializing a Minimal Ceph Cluster

Video link: Initializing a minimal Ceph cluster
Initializing a minimal cluster with cephadm bootstrap
> The cephadm bootstrap process creates a small Ceph cluster on a single node: one Ceph monitor and one Ceph mgr, plus the monitoring components, including prometheus, node-exporter, and so on.
```shell
## At bootstrap we specify the mon IP and the cluster network
# cephadm bootstrap --mon-ip 192.168.59.241 --cluster-network 10.168.59.0/24
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 2e1228b0-0781-11ee-aa8a-000c2921faf1
Verifying IP 192.168.59.241 port 3300 ...
Verifying IP 192.168.59.241 port 6789 ...
Mon IP `192.168.59.241` is in CIDR network `192.168.59.0/24`
Mon IP `192.168.59.241` is in CIDR network `192.168.59.0/24`
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.59.0/24
Setting cluster_network to 10.168.59.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
         URL: https://ceph01:8443/
        User: admin
    Password: p5tuqo17we
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/2e1228b0-0781-11ee-aa8a-000c2921faf1/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
        sudo /usr/sbin/cephadm shell --fsid 2e1228b0-0781-11ee-aa8a-000c2921faf1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
        sudo /usr/sbin/cephadm shell
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/master/mgr/telemetry/
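
## Optional: to run ceph commands directly on the host rather than through
## `cephadm shell` each time, ceph-common can be installed via cephadm
## (a sketch, assuming the quincy release bootstrapped above)
# cephadm add-repo --release quincy
# cephadm install ceph-common
# ceph -v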
## The initial dashboard username and password can also be specified at bootstrap time: --initial-dashboard-user admin --initial-dashboard-password demo2023
# ls /etc/ceph/
ceph.client.admin.keyring  ceph.conf  ceph.pub  rbdmap
```
- ceph.client.admin.keyring is the keyring for the client.admin user, which carries Ceph administrator privileges
- ceph.conf is the minimal configuration file
- ceph.pub is a public SSH key; once copied to the other nodes, it enables passwordless login (see the sketch after this list)
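As a minimal sketch, assuming an additional host named ceph02 that is reachable as root, the key can be distributed with ssh-copy-id:
```shell
# Install the cluster's public key into root's authorized_keys on the new node
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph02
```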
> When there are 5 or more Ceph nodes, 5 of them are used as mons by default; this can be seen from the count:5 placement in `ceph orch ls`.
```shell
# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager   ?:9093,9094      1/1  7m ago     46m  count:1
crash                           1/1  7m ago     46m  *
grafana        ?:3000           1/1  7m ago     46m  count:1
mgr                             1/2  7m ago     46m  count:2
mon                             1/5  7m ago     46m  count:5
node-exporter  ?:9100           1/1  7m ago     46m  *
prometheus     ?:9095           1/1  7m ago     46m  count:1
```
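If five mons are more than you want, the default placement can be overridden with `ceph orch apply`; a minimal sketch, assuming three target hosts named ceph01, ceph02, and ceph03:
```shell
# Limit the cluster to 3 monitor daemons
ceph orch apply mon 3

# Or pin the mons to specific hosts (the hostnames here are placeholders)
ceph orch apply mon --placement="ceph01 ceph02 ceph03"
```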
> After the mon is initialized, the cluster is still in a WARN state: there are no OSDs yet, and there is only 1 MON and 1 MGR, so the next step is to add more Ceph nodes.
```shell
# ceph -s
  cluster:
    id:     67ccccf2-07f6-11ee-a1c2-000c2921faf1
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum ceph01 (age 9m)
    mgr: ceph01.sdqukl(active, since 7m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
```
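To move past this WARN state, additional hosts are registered with the orchestrator; a sketch, assuming a hypothetical second node ceph02 at 192.168.59.242 whose root account already trusts ceph.pub (see the ssh-copy-id step above):
```shell
# Register the new host with cephadm's orchestrator
ceph orch host add ceph02 192.168.59.242

# Optionally add the _admin label so cephadm also maintains ceph.conf
# and the admin keyring on this host
ceph orch host label add ceph02 _admin

# Confirm the host list
ceph orch host ls
```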