Let's give it a try right away.
First, check things on the vFilO side.
Cluster floating IPs: [192.168.4.200/24, 10.10.107.200/21]
The client mounts using the first IP on this line; the second address is on the management network, so it is not used for data traffic.
[email protected]> cluster-view
ID: (redacted)
Name: CLUSTER3
State: High Availability
IP: 10.10.107.200/21
Cluster floating IPs: [192.168.4.200/24, 10.10.107.200/21]
Portal floating IPs: [192.168.5.200/24]
Since: 2019-12-04 08:27:26 UTC
Timezone: Asia/Tokyo
VVOL support: true
EULA accepted date: 2019-12-04 08:29:26 UTC
Online activation support: true
License expiration date: 2020-01-03 08:27:26 UTC
NAS volume capacity: [Total: 959.7GB, Used: 6.8GB, Free: 952.9GB]
Share space (quota): [Total: 1TB, Used: 0B, Free: 1TB]
Data directors:
[Object type: DATA_SPHERE, Node name: anvil4.datacore.jp, Role: SECONDARY, Oper state: UP, Admin state: UP]
[Object type: DATA_SPHERE, Node name: anvil3.datacore.jp, Role: PRIMARY, Oper state: UP, Admin state: UP]
[email protected]>
Incidentally, in this cluster the Anvil metadata volumes are attached over NVMe, and the DSX data drives are raw SSDs passed through to the VMs. All network interfaces use SR-IOV, exposing the 25GbE NIC's virtual functions (VFs) directly to the guests; traffic does not pass through a vSwitch.
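If you want to confirm the SR-IOV setup on a Linux hypervisor host, a minimal sketch using standard sysfs paths (the interface names and VF counts on your host will of course differ):

```shell
# List NICs that expose SR-IOV virtual functions and how many are enabled.
# sriov_numvfs / sriov_totalvfs are standard sysfs attributes for
# SR-IOV-capable devices; interfaces without them are skipped.
vf_count=0
for dev in /sys/class/net/*/device; do
    [ -e "$dev/sriov_numvfs" ] || continue
    iface=$(basename "$(dirname "$dev")")
    echo "$iface: $(cat "$dev/sriov_numvfs")/$(cat "$dev/sriov_totalvfs") VFs"
    vf_count=$((vf_count + 1))
done
echo "$vf_count SR-IOV capable interface(s) found"
```

On hosts without SR-IOV hardware the loop simply prints nothing but the final count.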
[email protected]> node-list
total 6
Name: anvil4.datacore.jp
Type: Product
Internal ID: 1073741829
ID: f0b912a2-4a39-5a6a-98e3-57fd9b371664
HW state: OK
Node state: MANAGED
Node mode: ONLINE
Management IP: 10.10.107.204/21
SW version: 4.2.1-41
Name: anvil3.datacore.jp
Type: Product
Internal ID: 1073741832
ID: be9d4b3a-3db8-5231-abda-f551a9481425
HW state: OK
Node state: MANAGED
Node mode: ONLINE
Management IP: 10.10.107.203/21
SW version: 4.2.1-41
Name: dsx1-1.datacore.jp
Type: Product
Internal ID: 1073741836
ID: eb6da562-672f-5a0e-a2aa-7793bfad5ce4
HW state: OK
Node state: MANAGED
Node mode: ONLINE
Management IP: 10.10.107.211/21
SW version: 4.2.1-41
Name: dsx2-1.datacore.jp
Type: Product
Internal ID: 1073741840
ID: b460f7e9-19b5-5d2f-b9e7-765d60468438
HW state: OK
Node state: MANAGED
Node mode: ONLINE
Management IP: 10.10.107.221/21
SW version: 4.2.1-41
Name: dsx1-2.datacore.jp
Type: Product
Internal ID: 1073741845
ID: 04ef7678-a610-52ff-8fda-873c5cbfc4ab
HW state: OK
Node state: MANAGED
Node mode: ONLINE
Management IP: 10.10.107.212/21
SW version: 4.2.1-41
Name: dsx2-2.datacore.jp
Type: Product
Internal ID: 1073741861
ID: 035fead9-9f57-5be7-9de9-48db15f81990
HW state: OK
Node state: MANAGED
Node mode: ONLINE
Management IP: 10.10.107.222/21
SW version: 4.2.1-41
Create a share from the GUI as appropriate. This time I named it **share1**.
[email protected]> share-list --name share1
ID: 9eb6084e-98d1-4995-a2c1-1d1385476f76
Name: share1
Internal ID: 2
State: PUBLISHED
Path: /share1
All applied objectives:
[ID: be321c5e-0edd-4e4d-8fff-2406daa87c52, Internal ID: 536870912, Name: keep-online]
[ID: 3b0e981f-6f21-4b5a-80fa-d7f94681bb9c, Internal ID: 536870914, Name: optimize-for-capacity]
[ID: d45d8170-44ae-4b99-8b85-e935d9f3fcc6, Internal ID: 536870915, Name: delegate-on-open]
[ID: bc945b58-e3d7-4cac-9f29-0387a6dde0bc, Internal ID: 536870916, Name: layout-get-on-open]
[ID: 5d30faed-b46a-4a0a-9db3-b2c3a3734be9, Internal ID: 536870918, Name: durability-1-nine]
[ID: 0b0e149d-3bf2-46dc-b2f6-5dc767f1ec38, Internal ID: 536870922, Name: availability-1-nine]
[ID: 0e685071-2730-4cda-96dc-705edb321bf7, Internal ID: 536870919, Name: durability-3-nines]
Active objectives:
[ID: 3b0e981f-6f21-4b5a-80fa-d7f94681bb9c, Internal ID: 536870914, Name: optimize-for-capacity]
[ID: d45d8170-44ae-4b99-8b85-e935d9f3fcc6, Internal ID: 536870915, Name: delegate-on-open]
Size: 1TB
Warn when size crosses: 90%
Size limit state: NORMAL
Export options:
[Subnet: *, Access permissions: RW, Root-squash: false]
Participant ID: 0
Replication participants:
ID: 00bba667-737a-47de-a715-dbe2072dc3b8
Participant share internal ID: 2
Participant site name: CLUSTER3
Participant site management address: 10.10.107.200
Participant site data address: 192.168.4.200
Participant ID: 0
[email protected]>
Install your favorite Linux distribution. This time I used CentOS 7.
# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
Update the NIC driver:
# yum install /tmp/kmod-qlgc-fastlinq-8.42.9.0-1.rhel7u7.x86_64.rpm
(Omitted)
# nmcli
ens192:Connected to ens192
"QLogic FastLinQ QL45000"
ethernet (qede), 00:0C:29:13:59:0C, hw,Port 000e1ed3db68, mtu 1500
inet4 192.168.4.101/24
route4 192.168.4.0/24
inet6 fe80::97f3:ab5c:4725:2edc/64
route6 fe80::/64
route6 ff00::/8
#
# yum install nfs-utils
(Omitted)
# mount -t nfs -o v4.2 192.168.4.200:/share1 /mnt
#
# mount
(Omitted)
192.168.4.200:/share1 on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.4.101,local_lock=none,addr=192.168.4.200)
#
# df -h
Filesystem             Size  Used Avail Use% Mounted on
192.168.4.200:/share1 932G 0 932G 0% /mnt
# lsmod | grep nfs_layout_flexfiles
nfs_layout_flexfiles 43542 1
nfsv4 583218 2 nfs_layout_flexfiles
nfs 261876 4 nfsv3,nfsv4,nfs_layout_flexfiles
sunrpc 354099 19 nfs,rpcsec_gss_krb5,auth_rpcgss,lockd,nfsv3,nfsv4,nfs_layout_flexfiles,nfs_acl
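With the flexfiles layout module loaded, you can also check whether the client is actually issuing pNFS layout operations. A small sketch, assuming the share is mounted at /mnt as above; `/proc/self/mountstats` carries per-operation NFS counters, and LAYOUTGET / GETDEVICEINFO lines appear there once the client has done I/O on a pNFS-capable mount:

```shell
# Look for pNFS layout operations in the per-mount NFS statistics.
# These counters only appear after the client has performed I/O, so
# an empty result immediately after mounting is normal.
grep -E 'LAYOUTGET|GETDEVICEINFO' /proc/self/mountstats \
    || echo "no pNFS layout operations recorded yet"
```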
Let's take a peek right away.
# ls /mnt
# ls /mnt/.snapshot/
current
# ls /mnt/.snapshot/current/
# ls /mnt/.collections/
all live open silent-access
assimilation-failed misaligned permanent-data-loss snapshot
backup not-selected replication-collision undelete
do-not-move not-selected2 scan volatile
durable offline selected
errored online selected2
There are no files yet, but various internal views (snapshots, collections) are already visible over NFS. Make use of them as you see fit.
Now let's try writing.
# vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
# ulimit -n
65536
# cd /tmp
# echo {1..1000} | tee testfile{0..65535}
This creates about 65,000 files of roughly 4 KB each. Note that `tee` holds all of its output files open simultaneously, which is why the nofile limit had to be raised first.
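Those numbers can be sanity-checked before pointing the workload at the share; a quick sketch of what the two brace expansions produce:

```shell
# testfile{0..65535} expands to 65536 file names, and `echo {1..1000}`
# emits a single ~3.9 KB line (2893 digit characters + 999 spaces +
# a newline), which tee then duplicates into every file.
nfiles=$(echo testfile{0..65535} | wc -w)
bytes=$(echo {1..1000} | wc -c)
echo "$nfiles files, $bytes bytes each"
```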
~~Continued next time.~~
References:
http://akishin.hatenablog.jp/entry/20130213/1360711554
https://qiita.com/kainos/items/5d8c47e64b5b06a60d0e
https://access.redhat.com/documentation/ja-jp/red_hat_enterprise_linux/7/html/storage_administration_guide/nfs-pnfs