Metro node initialization document (Part 2)
3.6 Configure the interview script (phase 2)
Note:
● Before starting phase 2, ensure that all BE arrays have been zoned and that the WAN IP details are ready. For details on the WAN-related requirements, see the WAN configuration details and checklist.
● The logging volume is used only for metro node clusters, not for local clusters.
1. Log in to Node-2-1-B as the service user.
2. Run the vplex_system_config --interview command and answer the interview questions.
service@director-1-1-b:~> vplex_system_config --interview
Do you want to re-run the phase-1 interview(y/n)?
Press "y" to re-run or "enter" to proceed with phase-2 interview. (default: n): <Enter>

Taking backup of existing SCIF has been started...
Taking backup of existing SCIF has been completed.

Phase-2 interview process is started...
Rediscovering the array EMC-SYMMETRIX-197900206 on cluster-1
Rediscovering the array XtremIO-XtremApp-FNM00192100255 on cluster-1
List of the meta volume candidates for cluster: cluster-1
SERIAL NO  VOLUME NAME                               CAPACITY  IO STATUS  TYPE         ARRAY NAME
---------  ----------------------------------------  --------  ---------  -----------  -------------------------------
1          VPD83T3:514f0c5ee4200f65                  80G       alive      traditional  XtremIO-XtremApp-FNM00192100255
2          VPD83T3:514f0c5ee4200f66                  80G       alive      traditional  XtremIO-XtremApp-FNM00192100255
3          VPD83T3:514f0c5ee4200f67                  80G       alive      traditional  XtremIO-XtremApp-FNM00192100255
4          VPD83T3:60000970000197900206533030333736  80G       alive      traditional  EMC-SYMMETRIX-197900206
5          VPD83T3:60000970000197900206533030333737  80G       alive      traditional  EMC-SYMMETRIX-197900206
6          VPD83T3:60000970000197900206533030333738  80G       alive      traditional  EMC-SYMMETRIX-197900206
Note: Storage volumes can be specified by name, by serial number, or by a combination of both.
Please select two volumes from different arrays (one from each array) from above list.
Enter Meta-volume candidate 1 : 1
Enter Meta-volume candidate 2 : 4
Selected volumes are: ['VPD83T3:514f0c5ee4200f65','VPD83T3:60000970000197900206533030333736']
Meta volume meta_C1_43A52L9 has been created successfully for the cluster: cluster-1
// A local cluster does not show the following information.
Rediscovering the array EMC-CLARiiON-APM01203914253 on cluster-2
Rediscovering the array EMC-CLARiiON-APM01204204223 on cluster-2
List of the meta volume candidates for cluster: cluster-2
SERIAL NO  VOLUME NAME                               CAPACITY  IO STATUS  TYPE         ARRAY NAME
---------  ----------------------------------------  --------  ---------  -----------  ---------------------------
1          VPD83T3:600601600570510022c1f85f348b420f  100G      alive      traditional  EMC-CLARiiON-APM01203914253
2          VPD83T3:600601600570510022c1f85f4088e1bb  100G      alive      traditional  EMC-CLARiiON-APM01203914253
3          VPD83T3:600601600570510023c1f85fead0cab4  100G      alive      traditional  EMC-CLARiiON-APM01203914253
4          VPD83T3:6006016005705100fac0f85fbe682e6c  80G       alive      traditional  EMC-CLARiiON-APM01203914253
5          VPD83T3:6006016005705100fbc0f85fd15e23ea  80G       alive      traditional  EMC-CLARiiON-APM01203914253
6          VPD83T3:6006016005705100fbc0f85fedea6ad3  80G       alive      traditional  EMC-CLARiiON-APM01203914253
7          VPD83T3:6006016005705100fcc0f85f0bc92ce7  80G       alive      traditional  EMC-CLARiiON-APM01203914253
8          VPD83T3:6006016005705100fdc0f85f73f57b2f  80G       alive      traditional  EMC-CLARiiON-APM01203914253
9          VPD83T3:6006016023c0520052f2dc5fd32ab612  100G      alive      traditional  EMC-CLARiiON-APM01204204223
Note: Storage volumes can be specified by name, by serial number, or by a combination of both. Please select two volumes from different arrays (one from each array) from above list.
Enter Meta-volume candidate 1 : 1
Enter Meta-volume candidate 2 : 9
Selected volumes are: ['VPD83T3:600601600570510022c1f85f348b420f','VPD83T3:6006016023c0520052f2dc5fd32ab612']
Meta volume meta_C2_43A22L9 has been created successfully for the cluster: cluster-2
Do you want to proceed with logging volume creation(y/n)? (default: y): <Enter>
Creating logging volume process is started...
Do you want to choose the components for logging volumes (Y/N)? (default: y): <Enter>
Choose number of volumes for logging volumes(1/2): 2
Rediscovering the array EMC-SYMMETRIX-197900206 on cluster-1
Rediscovering the array XtremIO-XtremApp-FNM00192100255 on cluster-1
Storage volume candidate list for cluster: cluster-1
SERIAL NO  VOLUME NAME                               CAPACITY  IO STATUS  TYPE         ARRAY NAME
---------  ----------------------------------------  --------  ---------  -----------  -------------------------------
1          VPD83T3:514f0c5ee4200f68                  5G        alive      traditional  XtremIO-XtremApp-FNM00192100255
2          VPD83T3:514f0c5ee4200f69                  5G        alive      traditional  XtremIO-XtremApp-FNM00192100255
3          VPD83T3:514f0c5ee4200f6a                  5G        alive      traditional  XtremIO-XtremApp-FNM00192100255
4          VPD83T3:514f0c5ee4200f6b                  5G        alive      traditional  XtremIO-XtremApp-FNM00192100255
5          VPD83T3:514f0c5ee4200f6c                  5G        alive      traditional  XtremIO-XtremApp-FNM00192100255
6          VPD83T3:60000970000197900206533030333741  5G        alive      traditional  EMC-SYMMETRIX-197900206
7          VPD83T3:60000970000197900206533030333742  5G        alive      traditional  EMC-SYMMETRIX-197900206
8          VPD83T3:60000970000197900206533030333743  5G        alive      traditional  EMC-SYMMETRIX-197900206
9          VPD83T3:60000970000197900206533030333744  5G        alive      traditional  EMC-SYMMETRIX-197900206
10         VPD83T3:60000970000197900206533030333745  5G        alive      traditional  EMC-SYMMETRIX-197900206
Note: Storage volumes can be specified by name, by serial number, or by a combination of both.
Please select two volumes from different arrays (one from each array) from above list.
Enter logging-volume candidate 1 : 1
Enter logging-volume candidate 2 : 6
Selected volumes are: ['VPD83T3:514f0c5ee4200f68','VPD83T3:60000970000197900206533030333741']
Logging volume c1_logging_43A52L9_vol has been created successfully for the cluster:cluster-1
Rediscovering the array EMC-CLARiiON-APM01203914253 on cluster-2
Rediscovering the array EMC-CLARiiON-APM01204204223 on cluster-2
Storage volume candidate list for cluster: cluster-2
SERIAL NO  VOLUME NAME                               CAPACITY  IO STATUS  TYPE         ARRAY NAME
---------  ----------------------------------------  --------  ---------  -----------  ---------------------------
1          VPD83T3:600601600570510069bff85ff2cbfe1d  5G        alive      traditional  EMC-CLARiiON-APM01203914253
2          VPD83T3:60060160057051006abff85f0444fbc0  5G        alive      traditional  EMC-CLARiiON-APM01203914253
3          VPD83T3:60060160057051006bbff85fd1f0e1a3  5G        alive      traditional  EMC-CLARiiON-APM01203914253
4          VPD83T3:60060160057051006cbff85f3aac88b0  5G        alive      traditional  EMC-CLARiiON-APM01203914253
5          VPD83T3:60060160057051006cbff85f775d1330  5G        alive      traditional  EMC-CLARiiON-APM01203914253
6          VPD83T3:6006016023c0520057f2dc5f80b7892f  5G        alive      traditional  EMC-CLARiiON-APM01204204223
7          VPD83T3:6006016023c0520058f2dc5f299f724b  5G        alive      traditional  EMC-CLARiiON-APM01204204223
8          VPD83T3:6006016023c0520059f2dc5fdccb4be5  5G        alive      traditional  EMC-CLARiiON-APM01204204223
9          VPD83T3:6006016023c052005af2dc5f165895f6  5G        alive      traditional  EMC-CLARiiON-APM01204204223
10         VPD83T3:6006016023c052005af2dc5fe09dc9a2  5G        alive      traditional  EMC-CLARiiON-APM01204204223
Note: Storage volumes can be specified by name, by serial number, or by a combination of both.
Please select two volumes from different arrays (one from each array) from above list.
Enter logging-volume candidate 1 : 1
Enter logging-volume candidate 2 : 6
Selected volumes are: ['VPD83T3:600601600570510069bff85ff2cbfe1d','VPD83T3:6006016023c0520057f2dc5f80b7892f']
Logging volume c2_logging_43A52L9_vol has been created successfully for the cluster:cluster-2
WANCOM interview process is started...
Types of wan configurations supported.
1. Routed
2. Bridged
Please select your wan type: 2
Collecting the wan details for cluster-1
Collecting Gateway, subnet, MTU & prefix for WC-00 ports at cluster-1
Enter prefix IP(should be same for all WC-00 of cluster-1)(example:192.168.35.0): 192.168.40.0
Enter netmask IP(should be same for all WC-00 of cluster-1): 255.255.255.0
Enter MTU(should be same for all WC-00 of cluster-1) (default: 1500): <Enter>
Collecting Gateway, subnet, MTU & prefix for WC-01 ports at cluster-1
Enter prefix IP(should be same for all WC-01 of cluster-1)(example:192.168.36.0): 192.168.41.0
Enter netmask IP(should be same for all WC-01 of cluster-1): 255.255.255.0
Enter MTU(should be same for all WC-01 of cluster-1) (default: 1500): <Enter>
Collecting node-1-1-A details
Enter WC-00 IP address: 192.168.40.35
Enter WC-01 IP address: 192.168.41.35
Collecting node-1-1-B details
Enter WC-00 IP address: 192.168.40.36
Enter WC-01 IP address: 192.168.41.36
Collecting the wan details for cluster-2
Collecting Gateway, subnet, MTU & prefix for WC-00 ports at cluster-2
Enter MTU(should be same for all WC-00 of cluster-2) (default: 1500): <Enter>
Collecting Gateway, subnet, MTU & prefix for WC-01 ports at cluster-2
Enter MTU(should be same for all WC-01 of cluster-2) (default: 1500): <Enter>
Collecting node-2-1-A details
Enter WC-00 IP address: 192.168.40.67
Enter WC-01 IP address: 192.168.41.67
Collecting node-2-1-B details
Enter WC-00 IP address: 192.168.40.68
Enter WC-01 IP address: 192.168.41.68
Please review the details entered.
wan_type: bridged
name: cluster-1
wan-com:
  WC-00:
    prefix: 192.168.40.0
    gateway:
    netmask: 255.255.255.0
    mtu: 1500
  WC-01:
    prefix: 192.168.41.0
    gateway:
    netmask: 255.255.255.0
    mtu: 1500
nodes:
  8K14Z23
    interfaces:
      WC-00:
        ip:
          address: 192.168.40.36
      WC-01:
        ip:
          address: 192.168.41.36
  8KJ0Z23
    interfaces:
      WC-00:
        ip:
          address: 192.168.40.35
      WC-01:
        ip:
          address: 192.168.41.35
name: cluster-2
wan-com:
  WC-00:
    prefix: 192.168.40.0
    gateway:
    netmask: 255.255.255.0
    mtu: 1500
  WC-01:
    prefix: 192.168.41.0
    gateway:
    netmask: 255.255.255.0
    mtu: 1500
nodes:
  8K11Z23
    interfaces:
      WC-00:
        ip:
          address: 192.168.40.68
      WC-01:
        ip:
          address: 192.168.41.68
  8K17Z23
    interfaces:
      WC-00:
        ip:
          address: 192.168.40.67
      WC-01:
        ip:
          address: 192.168.41.67
Please review the above details.
Do you want to proceed(y/n)? (default: y): <Enter>

SCIF has been copied to ['10.226.81.156', '10.226.81.155', '10.226.81.158', '10.226.81.157'] successfully.

Phase-2 interview process is completed.
To start the phase-2 configuration, run the command "vplex_system_config -c".
service@director-1-1-b:~>
3. Once the interview process completes successfully, proceed to the next task.
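For reference, the interview's candidate prompts accept either the serial number from the candidate list or the full volume name, as the note in the transcript states. A hypothetical mixed entry, shown for illustration only (not part of the recorded session):

Enter Meta-volume candidate 1 : 1
Enter Meta-volume candidate 2 : VPD83T3:60000970000197900206533030333736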
3.7 Apply the interview script (phase 2)
Note: Ensure that the front-end cables are connected to the SAN. If the ports show no-link, the script fails while verifying the FE port status.
1. Start phase 2 of the system configuration with the vplex_system_config -c command.
service@director-1-1-b:~> vplex_system_config -c
Active meta volume 'meta_C1_43A52L9' is already present for the cluster: cluster-1
Active meta volume 'meta_C2_43A22L9' is already present for the cluster: cluster-2
Starting the system configuration process for phase 2...
[WARNING]: Skipping plugin (/usr/lib/python3.6/site-packages/ansible/plugins/connection/saltstack.py) as it seems to be invalid: The 'cryptography' distribution was not found and is required by ansible
PLAY [localhost]
**************************************************************************************
TASK [Gathering Facts]
**************************************************************************************
ok: [localhost]
PLAY [cluster*]
**************************************************************************************
TASK [Gathering Facts]
**************************************************************************************
[WARNING]: Platform linux on host 8K14Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [8K14Z23]
[WARNING]: Platform linux on host 8K11Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [8K11Z23]
[WARNING]: Platform linux on host 8KJ0Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [8KJ0Z23]
[WARNING]: Platform linux on host 8K17Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information.
ok: [8K17Z23]
TASK [cfg_ports : include_tasks]
**************************************************************************************
included: /opt/dell/vplex/system_config/ansible/roles/cfg_ports/tasks/configure.yml for 8K14Z23, 8KJ0Z23, 8K11Z23, 8K17Z23
TASK [cfg_ports : Get the directors]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23]
TASK [cfg_ports : Creating rest urls for ports]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
ok: [8KJ0Z23]
skipping: [8K17Z23]
TASK [cfg_ports : Enabling the FE ports]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-A/ports/IO-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-B/ports/IO-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-A/ports/IO-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-B/ports/IO-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-A/ports/IO-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-B/ports/IO-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-A/ports/IO-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-B/ports/IO-01)
TASK [cfg_ports : Waiting for 5 seconds before checking the port status]
**********************************************************************************
Pausing for 5 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [8K14Z23]
TASK [cfg_ports : Verifying the FE ports status]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-A/ports/IO-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-B/ports/IO-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-A/ports/IO-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-B/ports/IO-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-A/ports/IO-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-B/ports/IO-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-A/ports/IO-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-B/ports/IO-01)
TASK [cfg_ports : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
// A local cluster does not show the following WC information for the WAN ports.
TASK [cfg_wan_interfaces : include_tasks]
**************************************************************************************
included: /opt/dell/vplex/system_config/ansible/roles/cfg_wan_interfaces/tasks/configure_wc00.yml for 8K14Z23, 8KJ0Z23, 8K11Z23, 8K17Z23
TASK [cfg_wan_interfaces : Setting the ip address for WC-00 interfaces]
***********************************************************************************
changed: [8K14Z23]
changed: [8K11Z23]
changed: [8KJ0Z23]
changed: [8K17Z23]
TASK [cfg_wan_interfaces : Configuring the route files for WC-00 interfaces]
******************************************************************************
changed: [8K14Z23]
changed: [8KJ0Z23]
changed: [8K11Z23]
changed: [8K17Z23]
TASK [cfg_wan_interfaces : Enabling WC-00 interfaces at OS level]
**************************************************************************************
[WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than
running sudo
ok: [8K17Z23]
ok: [8K11Z23]
ok: [8KJ0Z23]
ok: [8K14Z23]
TASK [cfg_wan_interfaces : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_wan_interfaces : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_wan_interfaces : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_wan_interfaces : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_wan_interfaces : include_tasks]
**************************************************************************************
included: /opt/dell/vplex/system_config/ansible/roles/cfg_wan_interfaces/tasks/configure_wc01.yml for 8K14Z23, 8KJ0Z23, 8K11Z23, 8K17Z23
TASK [cfg_wan_interfaces : Setting the ip address for WC-01 interfaces]
***********************************************************************************
changed: [8K14Z23]
changed: [8KJ0Z23]
changed: [8K11Z23]
changed: [8K17Z23]
TASK [cfg_wan_interfaces : Configuring the route files for WC-01 interfaces]
******************************************************************************
changed: [8K14Z23]
changed: [8KJ0Z23]
changed: [8K11Z23]
changed: [8K17Z23]
TASK [cfg_wan_interfaces : Enabling WC-01 interfaces at OS level]
**************************************************************************************
ok: [8K17Z23]
ok: [8K14Z23]
ok: [8KJ0Z23]
ok: [8K11Z23]
TASK [cfg_wan_interfaces : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_wan_interfaces : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_udcom_paths : Creating udcom paths for wancom and nmg add site]
*********************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
ok: [8KJ0Z23]
ok: [8K17Z23]
TASK [cfg_validate_wan_ips : Getting cluster_1 wan_com ip addresses]
**************************************************************************************
ok: [8K14Z23] => (item=8K14Z23)
ok: [8KJ0Z23] => (item=8K14Z23)
ok: [8K14Z23] => (item=8KJ0Z23)
ok: [8K11Z23] => (item=8K14Z23)
ok: [8KJ0Z23] => (item=8KJ0Z23)
ok: [8K17Z23] => (item=8K14Z23)
ok: [8K11Z23] => (item=8KJ0Z23)
ok: [8K17Z23] => (item=8KJ0Z23)
TASK [cfg_validate_wan_ips : Getting cluster_2 wan_com ip addresses]
**************************************************************************************
ok: [8K14Z23] => (item=8K11Z23)
ok: [8KJ0Z23] => (item=8K11Z23)
ok: [8K14Z23] => (item=8K17Z23)
ok: [8K11Z23] => (item=8K11Z23)
ok: [8KJ0Z23] => (item=8K17Z23)
ok: [8K17Z23] => (item=8K11Z23)
ok: [8K11Z23] => (item=8K17Z23)
ok: [8K17Z23] => (item=8K17Z23)
TASK [cfg_validate_wan_ips : Pinging cluster-2 wan_com ip addresses from cluster-1]
***********************************************************************
skipping: [8K11Z23] => (item=192.168.40.68)
skipping: [8K11Z23] => (item=192.168.40.67)
skipping: [8K11Z23] => (item=192.168.41.68)
skipping: [8K11Z23] => (item=192.168.41.67)
skipping: [8K17Z23] => (item=192.168.40.68)
skipping: [8K17Z23] => (item=192.168.40.67)
skipping: [8K17Z23] => (item=192.168.41.68)
skipping: [8K17Z23] => (item=192.168.41.67)
ok: [8K14Z23] => (item=192.168.40.68)
ok: [8KJ0Z23] => (item=192.168.40.68)
ok: [8K14Z23] => (item=192.168.40.67)
ok: [8KJ0Z23] => (item=192.168.40.67)
ok: [8K14Z23] => (item=192.168.41.68)
ok: [8KJ0Z23] => (item=192.168.41.68)
ok: [8K14Z23] => (item=192.168.41.67)
ok: [8KJ0Z23] => (item=192.168.41.67)
TASK [cfg_validate_wan_ips : Pinging cluster-1 wan_com ip addresses from cluster-2]
***********************************************************************
skipping: [8K14Z23] => (item=192.168.40.36)
skipping: [8K14Z23] => (item=192.168.40.35)
skipping: [8K14Z23] => (item=192.168.41.36)
skipping: [8K14Z23] => (item=192.168.41.35)
skipping: [8KJ0Z23] => (item=192.168.40.36)
skipping: [8KJ0Z23] => (item=192.168.40.35)
skipping: [8KJ0Z23] => (item=192.168.41.36)
skipping: [8KJ0Z23] => (item=192.168.41.35)
ok: [8K11Z23] => (item=192.168.40.36)
ok: [8K17Z23] => (item=192.168.40.36)
ok: [8K11Z23] => (item=192.168.40.35)
ok: [8K17Z23] => (item=192.168.40.35)
ok: [8K11Z23] => (item=192.168.41.36)
ok: [8K17Z23] => (item=192.168.41.36)
ok: [8K11Z23] => (item=192.168.41.35)
ok: [8K17Z23] => (item=192.168.41.35)
TASK [cfg_wan_ports : include_tasks]
**************************************************************************************
included: /opt/dell/vplex/system_config/ansible/roles/cfg_wan_ports/tasks/configure.yml for 8K14Z23, 8KJ0Z23, 8K11Z23, 8K17Z23
TASK [cfg_wan_ports : Get the directors]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23]
TASK [cfg_wan_ports : Creating rest urls for ports]
**************************************************************************************
skipping: [8K14Z23]
ok: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_wan_ports : Enabling the WC-00 ports]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-A/ports/WC-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-B/ports/WC-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-A/ports/WC-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-B/ports/WC-00)
TASK [cfg_wan_ports : Waiting for 5 seconds before checking the port status]
******************************************************************************
Pausing for 5 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [8K14Z23]
TASK [cfg_wan_ports : Verifying the WC-00 ports status]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-A/ports/WC-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-B/ports/WC-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-A/ports/WC-00)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-B/ports/WC-00)
TASK [cfg_wan_ports : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_wan_ports : include_tasks]
**************************************************************************************
included: /opt/dell/vplex/system_config/ansible/roles/cfg_wan_ports/tasks/configure.yml for 8K14Z23, 8KJ0Z23, 8K11Z23, 8K17Z23
TASK [cfg_wan_ports : Get the directors]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23]
TASK [cfg_wan_ports : Creating rest urls for ports]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
ok: [8KJ0Z23]
skipping: [8K17Z23]
TASK [cfg_wan_ports : Enabling the WC-01 ports]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-A/ports/WC-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-B/ports/WC-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-A/ports/WC-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-B/ports/WC-01)
TASK [cfg_wan_ports : Waiting for 5 seconds before checking the port status]
******************************************************************************
Pausing for 5 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [8K14Z23]
TASK [cfg_wan_ports : Verifying the WC-01 ports status]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-A/ports/WC-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-1-1-B/ports/WC-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-A/ports/WC-01)
ok: [8KJ0Z23] => (item=http://director-1-1-A-proxy:65102/vplex/v2/directors/director-2-1-B/ports/WC-01)
TASK [cfg_wan_ports : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_cluster_status : Getting the cluster status]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
ok: [8KJ0Z23]
ok: [8K17Z23]
TASK [cfg_cluster_status : Getting the local_com health]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
ok: [8KJ0Z23]
ok: [8K17Z23]
TASK [cfg_cluster_status : Getting the wan_com health]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8K11Z23]
ok: [8K17Z23]
ok: [8KJ0Z23]
TASK [cfg_cluster_status : Checking LCOM health]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_cluster_status : Checking WCOM health]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_periodic_backup : include_tasks]
**************************************************************************************
included: /opt/dell/vplex/system_config/ansible/roles/cfg_periodic_backup/tasks/configure.yml for 8K14Z23, 8KJ0Z23, 8K11Z23, 8K17Z23
TASK [cfg_periodic_backup : Starting the vplex-node-backup.timer scheduler]
*******************************************************************************
ok: [8K14Z23]
ok: [8KJ0Z23]
ok: [8K11Z23]
ok: [8K17Z23]
TASK [cfg_periodic_backup : Checking vplex-node-backup.timer status]
**************************************************************************************
ok: [8K14Z23]
ok: [8KJ0Z23]
ok: [8K11Z23]
ok: [8K17Z23]
TASK [cfg_periodic_backup : Verifying vplex-node-backup.timer status]
*************************************************************************************
ok: [8K14Z23] => {
"msg": " Active: active (waiting) since Fri 2021-04-16 18:27:40 UTC; 543ms ago"}
ok: [8KJ0Z23] => {
"msg": " Active: active (waiting) since Fri 2021-04-16 18:27:40 UTC; 546ms ago"}
ok: [8K11Z23] => {
"msg": " Active: active (waiting) since Fri 2021-04-16 18:27:41 UTC; 558ms ago"}
ok: [8K17Z23] => {
"msg": " Active: active (waiting) since Fri 2021-04-16 18:27:41 UTC; 552ms ago"}
TASK [cfg_periodic_backup : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_service_target_dependency : include_tasks]
**************************************************************************************
included: /opt/dell/vplex/system_config/ansible/roles/cfg_service_target_dependency/tasks/configure.yml for 8K14Z23, 8KJ0Z23, 8K11Z23, 8K17Z23
TASK [cfg_service_target_dependency : Creating 'configuration-done' file]
*********************************************************************************
changed: [8K17Z23]
changed: [8KJ0Z23]
changed: [8K14Z23]
changed: [8K11Z23]
TASK [cfg_service_target_dependency : Isolate the vplex-node target]
**************************************************************************************
ok: [8K14Z23]
ok: [8KJ0Z23]
ok: [8K11Z23]
ok: [8K17Z23]
TASK [cfg_service_target_dependency : Setting vplex-node target as default]
*******************************************************************************
ok: [8K14Z23]
ok: [8KJ0Z23]
ok: [8K11Z23]
ok: [8K17Z23]
TASK [cfg_service_target_dependency : Checking system is running]
**************************************************************************************
ok: [8K14Z23]
ok: [8KJ0Z23]
ok: [8K17Z23]
ok: [8K11Z23]
TASK [cfg_service_target_dependency : include_tasks]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
PLAY RECAP
**************************************************************************************
8K11Z23 : ok=24 changed=5 unreachable=0 failed=0 skipped=30 rescued=0 ignored=0
8K14Z23 : ok=27 changed=5 unreachable=0 failed=0 skipped=30 rescued=0 ignored=0
8K17Z23 : ok=28 changed=5 unreachable=0 failed=0 skipped=26 rescued=0 ignored=0
8KJ0Z23 : ok=40 changed=5 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
service@director-1-1-b:~>
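If the playbook fails while verifying the FE port status (see the note at the beginning of this task), correct the SAN cabling and re-run vplex_system_config -c. The inter-cluster reachability that the cfg_validate_wan_ips tasks verify can also be checked by hand from the service shell; a minimal sketch, assuming standard Linux ping is available and using the example WAN addresses from this document:

service@director-1-1-b:~> ping -c 3 192.168.40.67    # WC-00 of node-2-1-A
service@director-1-1-b:~> ping -c 3 192.168.41.68    # WC-01 of node-2-1-B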
3.8 Create a meta-volume backup
1. Log in to the VPlexcli.
2. Run the configuration metadata-backup command to create the backup.

VPlexcli:/> configuration metadata-backup
Configuring Meta-data Backups
To configure meta-data backups you will need to select two unclaimed volumes (78G or greater), preferably on two different arrays. Backups will occur automatically each day, at a time you specify. Please note: All times are UTC and are not based on the local time.
Available Volumes for Meta-data Backup
Name                                      Capacity  Vendor   IO Status  Type         Array Name
----------------------------------------  --------  -------  ---------  -----------  ---------------------------
VPD83T3:60000970000197900205533034383237  80G       None     alive      traditional  EMC-SYMMETRIX-197900205
VPD83T3:60000970000197900205533034383238  80G       None     alive      traditional  EMC-SYMMETRIX-197900205
VPD83T3:60000970000197900205533035304433  80G       None     alive      traditional  EMC-SYMMETRIX-197900205
VPD83T3:60000970000197900205533035304434  80G       None     alive      traditional  EMC-SYMMETRIX-197900205
VPD83T3:600601602b714a00b702905f1dd6c085  80G       DGC      alive      traditional  EMC-CLARiiON-FNM00185000853
VPD83T3:68ccf098000eb1532694bed1ac1d3842  80G       DellEMC  alive      traditional  DellEMC-PowerStore-7777
VPD83T3:68ccf098006a944ce6980bb67490b8d6  80G       DellEMC  alive      traditional  DellEMC-PowerStore-7777
VPD83T3:68ccf09800c449d388b80020b66ee57e  80G       DellEMC  alive      traditional  DellEMC-PowerStore-7777
Please select volumes for meta-data backup, preferably from two different arrays (volume1,volume2):VPD83T3:68ccf098000eb1532694bed1ac1d3842,VPD83T3:600601602b714a00b702905f1dd6c085
What hour of the day (UTC) should the meta-data be backed up? (0..23):
Invalid hour
What hour of the day (UTC) should the meta-data be backed up? (0..23): 12
What minute of the hour should the meta-data be backed up? (0..59): 59
VPLEX is configured to back up meta-data every day at 12:59 (UTC).
Would you like to change the time the meta-data is backed up? [no]:
You have chosen to configure the backup of the meta-data. Please note: All times are UTC and are not based on the local time.
Review and Finish
Review the configuration information below. If the values are correct, enter yes (or simply accept the default and press Enter) to start the setup process. If the values are not correct, enter no to go back and make changes or to exit the setup.
Meta-data Backups
Meta-data will be backed up every day at 12:59.
The following volumes will be used for the backup: VPD83T3:68ccf098000eb1532694bed1ac1d3842,VPD83T3:600601602b714a00b702905f1dd6c085
Would you like to run the setup process now? [yes]:
The metadata backup has been successfully scheduled.
// A local cluster does not have the following cluster-2 configuration.
Log in to the VPlexcli on Node-2-1-A and create the backup using the same procedure described above.
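Optionally, the volumes backing the schedule can be confirmed from the VPlexcli. A minimal sketch, assuming the metro node VPlexcli exposes the same system-volumes context as earlier VPLEX releases; the listing should show the active meta volume together with the backup volumes selected above:

VPlexcli:/> ll /clusters/cluster-1/system-volumes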
3.9 Check the cluster status
1. Once the previous command completes successfully, log in to the VPlexcli and check the cluster status and cluster summary. The cluster status should be ok for all fields, and the cluster summary should show both clusters in the same island.
VPlexcli:/> cluster status
Cluster cluster-1
    operational-status:         ok
    transitioning-indications:
    transitioning-progress:
    health-state:               ok
    health-indications:
    local-com:                  ok
Cluster cluster-2
    operational-status:         ok
    transitioning-indications:
    transitioning-progress:
    health-state:               ok
    health-indications:
    local-com:                  ok

wan-com: ok
VPlexcli:/> cluster summary
Clusters:
Name       Cluster ID  TLA      Connected  Expelled  Operational Status  Health State
---------  ----------  -------  ---------  --------  ------------------  ------------
cluster-1  1           43A26L9  true       false     ok                  ok
cluster-2  2           43A56L9  true       false     ok                  ok
Islands:
Island ID  Clusters
---------  --------------------
1          cluster-1, cluster-2
Note: Once the metadata backup has been created successfully on the clusters and the cluster status command shows ok, the system configuration is complete. Users can log in to the UI and create the soft configuration, including creating devices and virtual volumes, exporting to hosts, and so on.
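As an additional spot check, an overall health report can be generated from the VPlexcli; a sketch assuming the health-check command carries over from earlier VPLEX releases:

VPlexcli:/> health-check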
3.10 Cluster Witness
Cluster Witness (CW) support enables the metro node solution to improve overall environment availability by arbitrating between a pure communication failure between the two primary sites and an actual site failure in a multi-site architecture. Starting with release 7.0.1, the system can rely on a component called the metro node Witness. The Witness is an optional component designed for customer environments where, in the event of a site disaster or a metro node cluster or inter-cluster failure, conventional preference rule sets cannot provide seamless storage availability with zero or near-zero RTO. For more information about Cluster Witness (CW) configuration, see the Cluster Witness configuration guide for the metro node appliance, available on SolVe (https://solveonline.emc.com/solve/home/74).
3.11 Configure or change SupportAssist
SupportAssist can be configured separately after the system configuration is complete. Users can perform the following SupportAssist operations:
1. Collect the SupportAssist details: vplex_system_config --interview --supportassist-config
2. Apply the SupportAssist configuration: vplex_system_config --support-assist
3. Show the SupportAssist details: vplex_system_config --interview --show-supportassist
4. Enable the SupportAssist configuration: vplex_system_config --support-enable
5. Disable the SupportAssist configuration: vplex_system_config --support-disable
6. Reset the SupportAssist configuration: vplex_system_config --reset-supportassist
7. Update the SupportAssist gateway: vplex_system_config --interview --update-supportassist-gateway
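Taken together, a typical first-time SupportAssist setup runs the interview, applies the configuration, and then displays it for verification, using only the commands listed above:

service@director-1-1-b:~> vplex_system_config --interview --supportassist-config
service@director-1-1-b:~> vplex_system_config --support-assist
service@director-1-1-b:~> vplex_system_config --interview --show-supportassist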