Set Up a Highly Available NFS Cluster with Disk Encryption Using LUKS, DRBD, Corosync and Pacemaker

  • CentOS 8 Stream
  • LUKS
  • DRBD
  • Corosync and Pacemaker

A) Preparations

  • VM #1: nfs1.example.com / 192.168.10.11
  • VM #2: nfs2.example.com / 192.168.10.12
  • Service Name / IP: nfs.example.com / 192.168.10.10
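
Both nodes need to resolve each other's hostnames as well as the service name. A minimal sketch of /etc/hosts entries for both VMs, assuming no DNS records exist for these names:

cat <<EOF >> /etc/hosts
192.168.10.10 nfs.example.com nfs
192.168.10.11 nfs1.example.com nfs1
192.168.10.12 nfs2.example.com nfs2
EOF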

B) Setup Disk Encryption

Step 1: Encrypt the data disk

[root@nfs1 ~]# cryptsetup luksFormat /dev/sdb

WARNING!
========
This will overwrite data on /dev/sdb irrevocably.
Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sdb: abcd1234ABCD
Verify passphrase: abcd1234ABCD

Step 2: Open the encrypted disk

[root@nfs1 ~]# cryptsetup open /dev/sdb cryptedsdb
Enter passphrase for /dev/sdb:

Step 3: Enable auto-unlock of the encrypted disk on boot

[root@nfs1 ~]# mkdir /etc/luks-keys
[root@nfs1 ~]# dd if=/dev/urandom of=/etc/luks-keys/sdb_secret_key bs=512 count=8
8+0 records in
8+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 7.8853e-05 s, 51.9 MB/s
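
The key file allows unlocking the disk without typing a passphrase, so it should be readable by root only. A hedged addition, not part of the original steps:

chmod 700 /etc/luks-keys
chmod 400 /etc/luks-keys/sdb_secret_key
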
[root@nfs1 ~]# cryptsetup -v luksAddKey /dev/sdb /etc/luks-keys/sdb_secret_key
Enter any existing passphrase:
Key slot 0 unlocked.
Key slot 1 created.
Command successful.
[root@nfs1 ~]# cryptsetup luksDump /dev/sdb | egrep "Keyslots:|luks2"
Keyslots:
0: luks2
1: luks2
[root@nfs1 ~]# cryptsetup -v luksClose cryptedsdb
Command successful.
[root@nfs1 ~]# cryptsetup -v luksOpen /dev/sdb cryptedsdb --key-file=/etc/luks-keys/sdb_secret_key
Key slot 1 unlocked.
Command successful.
[root@nfs1 ~]# cryptsetup luksDump /dev/sdb | grep "UUID"
UUID: 7924d99f-8007-4970-8798-698887938626
[root@nfs1 ~]# echo "cryptedsdb UUID=7924d99f-8007-4970-8798-698887938626 /etc/luks-keys/sdb_secret_key luks" > /etc/crypttab

Step 4: Backup the LUKS header

[root@nfs1 ~]# cryptsetup -v luksHeaderBackup /dev/sdb --header-backup-file /root/LuksHeaderBackup_sdb.bin
Command successful.
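
Keep the backup file off the VM as well. Should the LUKS header on /dev/sdb ever become damaged, it can be restored from this backup; the restore command is shown here for reference:

cryptsetup luksHeaderRestore /dev/sdb --header-backup-file /root/LuksHeaderBackup_sdb.bin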

Step 5: Repeat steps 1 to 4 on the second NFS VM

C) Setup DRBD Disk Replication

Step 1: Create and configure the Volume Group and Logical Volume

[root@nfs1 ~]# pvcreate /dev/mapper/cryptedsdb
Physical volume "/dev/mapper/cryptedsdb" successfully created.
[root@nfs1 ~]# vgcreate drbdvg1 /dev/mapper/cryptedsdb
Volume group "drbdvg1" successfully created
[root@nfs1 ~]# lvcreate -L 1g -n data1 drbdvg1
Logical volume "data1" created.
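
Optionally confirm that the logical volume and its device-mapper path, which the DRBD resource definition below refers to, exist:

lvs drbdvg1
ls -l /dev/mapper/drbdvg1-data1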

Step 2: Compile DRBD kernel module from source

dnf update
dnf install git gcc gcc-c++ make automake autoconf rpm-build kernel-devel kernel-rpm-macros kernel-abi-whitelists elfutils-libelf-devel
mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
curl -L -O https://www.linbit.com/downloads/drbd/9.0/drbd-9.0.24-1.tar.gz
tar zxf drbd-9.0.24-1.tar.gz
cd drbd-9.0.24-1
make kmp-rpm srpm

Step 3: Install the compiled DRBD kernel module

cd ~/rpmbuild/RPMS/x86_64/
dnf localinstall -y kmod-drbd-9.0.24_4.18.0_227-1.x86_64.rpm
dnf install yum-plugin-versionlock
dnf versionlock kernel*
[root@nfs1 ~]# cat /etc/yum/pluginconf.d/versionlock.list
# Added lock on Mon Aug 17 10:58:21 2020
kernel-devel-0:4.18.0-227.el8.*
kernel-rpm-macros-0:123-1.el8.*
kernel-tools-libs-0:4.18.0-227.el8.*
kernel-core-0:4.18.0-227.el8.*
kernel-tools-0:4.18.0-227.el8.*
kernel-0:4.18.0-227.el8.*
kernel-modules-0:4.18.0-227.el8.*
kernel-abi-whitelists-0:4.18.0-227.el8.*
kernel-headers-0:4.18.0-227.el8.*
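
The version lock keeps a kernel update from pulling in a kernel that the compiled DRBD module no longer matches. When the DRBD module is later rebuilt for a newer kernel, the locks can be removed first; a sketch, assuming the versionlock plugin installed above:

dnf versionlock delete 'kernel*'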

Step 4: Configure SELinux and Firewall for DRBD

semanage permissive -a drbd_t
[root@nfs1 ~]# ausearch -c 'drbdsetup' --raw | audit2allow -M my-drbdsetup
******************** IMPORTANT ***********************
To make this policy package active, execute:
semodule -i my-drbdsetup.pp
[root@nfs1 ~]# semodule -X 300 -i my-drbdsetup.pp
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="<IP of remote node>" port port="7789" protocol="tcp" accept'
firewall-cmd --reload
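
Run the rule on both nodes, substituting the peer's address for <IP of remote node>. For example, on nfs1 the rule allows DRBD replication traffic from nfs2 (192.168.10.12):

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.10.12" port port="7789" protocol="tcp" accept'
firewall-cmd --reload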

Step 5: Configure DRBD

cd /etc/drbd.d
cp global_common.conf global_common.conf.default

Edit /etc/drbd.d/global_common.conf so that it contains:

global {
    usage-count no;
}
common {
    net {
        protocol C;
    }
}

Then define the replicated resource in a new file, e.g. /etc/drbd.d/nfsha.res:

resource nfsha {
    on nfs1.example.com {
        device /dev/drbd1;
        disk /dev/mapper/drbdvg1-data1;
        meta-disk internal;
        address 192.168.10.11:7789;
    }
    on nfs2.example.com {
        device /dev/drbd1;
        disk /dev/mapper/drbdvg1-data1;
        meta-disk internal;
        address 192.168.10.12:7789;
    }
}
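
The DRBD configuration must be identical on both nodes, so copy the changed files (global_common.conf and the resource file, nfsha.res in this example) to nfs2 before continuing; running drbdadm dump on each node is an optional sanity check of the syntax:

scp /etc/drbd.d/global_common.conf /etc/drbd.d/nfsha.res nfs2.example.com:/etc/drbd.d/
drbdadm dump nfsha
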
[root@nfs1 ~]# drbdadm create-md nfsha
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/mapper/drbdvg1-data1 at byte offset 1073737728
Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes
md_offset 1073737728
al_offset 1073704960
bm_offset 1073672192
Found some data

==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm] yes
initializing activity log
initializing bitmap (32 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
[root@nfs2 drbd.d]# drbdadm create-md nfsha
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/mapper/drbdvg1-data1 at byte offset 1073737728
Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes
md_offset 1073737728
al_offset 1073704960
bm_offset 1073672192
Found some data

==> This might destroy existing data! <==

Do you want to proceed?
[need to type 'yes' to confirm] yes
initializing activity log
initializing bitmap (32 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
systemctl start drbd
[root@nfs1 ~]# drbdadm status nfsha
nfsha role:Secondary
disk:Inconsistent
nfs2.example.com role:Secondary
peer-disk:Inconsistent
[root@nfs2 ~]# drbdadm status nfsha
nfsha role:Secondary
disk:Inconsistent
nfs1.example.com role:Secondary
peer-disk:Inconsistent
drbdadm primary --force nfsha
[root@nfs1 ~]# drbdadm status nfsha
nfsha role:Primary
disk:UpToDate
nfs2.example.com role:Secondary
peer-disk:UpToDate
[root@nfs2 ~]# drbdadm status nfsha
nfsha role:Secondary
disk:UpToDate
nfs1.example.com role:Primary
peer-disk:UpToDate

Step 6: Setup filesystem on the DRBD device

[root@nfs1 ~]# mkfs.xfs /dev/drbd1
meta-data=/dev/drbd1             isize=512    agcount=4, agsize=65532 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=262127, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1566, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@nfs1 ~]# mkdir /mnt/nfsdata
[root@nfs1 ~]# mount /dev/drbd1 /mnt/nfsdata
[root@nfs1 ~]# semanage fcontext -a -t nfs_t /mnt/nfsdata
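
The semanage rule only records the desired label; applying it to the already-created directory takes a restorecon run (a hedged addition, not shown in the original output):

restorecon -Rv /mnt/nfsdata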

D) Setup NFS Cluster with Corosync and Pacemaker

Step 1: Installation and basic setup

Install the cluster packages and set the hacluster password on both nodes, then authenticate the nodes from nfs1:

dnf --enablerepo=HighAvailability -y install pacemaker pcs corosync
systemctl enable --now pcsd
setsebool -P daemons_enable_cluster_mode 1
firewall-cmd --add-service=high-availability --permanent
firewall-cmd --reload
passwd hacluster
[root@nfs1 ~]# pcs host auth nfs1.example.com nfs2.example.com
Username: hacluster
Password: <password>
nfs1.example.com: Authorized
nfs2.example.com: Authorized

Step 2: Create the NFS cluster

[root@nfs1 ~]# pcs cluster setup nfs-cluster nfs1.example.com nfs2.example.com
No addresses specified for host 'nfs1.example.com', using 'nfs1.example.com'
No addresses specified for host 'nfs2.example.com', using 'nfs2.example.com'
Destroying cluster on hosts: 'nfs1.example.com', 'nfs2.example.com'...
nfs1.example.com: Successfully destroyed cluster
nfs2.example.com: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'nfs1.example.com', 'nfs2.example.com'
nfs1.example.com: successful removal of the file 'pcsd settings'
nfs2.example.com: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'nfs1.example.com', 'nfs2.example.com'
nfs1.example.com: successful distribution of the file 'corosync authkey'
nfs1.example.com: successful distribution of the file 'pacemaker authkey'
nfs2.example.com: successful distribution of the file 'corosync authkey'
nfs2.example.com: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'nfs1.example.com', 'nfs2.example.com'
nfs1.example.com: successful distribution of the file 'corosync.conf'
nfs2.example.com: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@nfs1 ~]# pcs cluster start --all
nfs1.example.com: Starting Cluster...
nfs2.example.com: Starting Cluster...
[root@nfs1 ~]# pcs cluster enable --all
nfs1.example.com: Cluster Enabled
nfs2.example.com: Cluster Enabled
[root@nfs1 ~]# pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: corosync
* Current DC: nfs1.example.com (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
* Last updated: Thu Jul 9 12:45:48 2020
* Last change: Thu Jul 9 12:45:34 2020 by hacluster via crmd on nfs1.example.com
* 2 nodes configured
* 0 resource instances configured
Node List:
* Online: [ nfs1.example.com nfs2.example.com ]
PCSD Status:
nfs1.example.com: Online
nfs2.example.com: Online
[root@nfs1 ~]# pcs status corosync
Membership information
----------------------
Nodeid Votes Name
1 1 nfs1.example.com (local)
2 1 nfs2.example.com
Fencing is not configured yet, so disable STONITH for now; it will be re-enabled in section E) once VMware fencing is set up:

pcs property set stonith-enabled=false

Step 3: Add DRBD and Filesystem resources to the NFS cluster

pcs cluster cib nfs-cluster-config
pcs -f nfs-cluster-config resource create NFS-DRBD ocf:linbit:drbd drbd_resource=nfsha op monitor interval=60s
pcs -f nfs-cluster-config resource promotable NFS-DRBD promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs -f nfs-cluster-config resource create NFS-Data ocf:heartbeat:Filesystem device="/dev/drbd1" directory="/mnt/nfsdata" fstype="xfs" options="uquota,pquota"
pcs -f nfs-cluster-config constraint colocation add NFS-Data with NFS-DRBD-clone INFINITY with-rsc-role=Master
pcs -f nfs-cluster-config constraint order promote NFS-DRBD-clone then start NFS-Data
[root@nfs1 ~]# pcs -f nfs-cluster-config resource status
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Stopped: [ nfs1.example.com nfs2.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Stopped
[root@nfs1 ~]# pcs cluster cib-push nfs-cluster-config
CIB updated
[root@nfs1 ~]# pcs resource status
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Masters: [ nfs1.example.com ]
* Slaves: [ nfs2.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Started nfs1.example.com

Step 4: Add NFS Server resource to the NFS cluster

dnf install nfs-utils
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload
cat <<EOF > /etc/exports
/mnt/nfsdata/folder1 192.168.10.0/255.255.255.0(rw,sync,root_squash)
/mnt/nfsdata/folder2 192.168.10.0/255.255.255.0(rw,sync,root_squash)
EOF
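
Two details worth noting here, stated as assumptions about the intended layout: the exported directories must exist on the replicated filesystem (create them once on the node where /mnt/nfsdata is currently mounted), and /etc/exports must be present on both nodes because it does not live on the DRBD device:

mkdir -p /mnt/nfsdata/folder1 /mnt/nfsdata/folder2
scp /etc/exports nfs2.example.com:/etc/exports
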
pcs -f nfs-cluster-config resource create NFS-Server systemd:nfs-server op monitor interval="30s"
pcs -f nfs-cluster-config constraint colocation add NFS-Server with NFS-Data INFINITY
pcs -f nfs-cluster-config constraint order start NFS-Data then start NFS-Server
[root@nfs1 ~]# pcs -f nfs-cluster-config resource status
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Stopped: [ nfs1.example.com nfs2.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Stopped
* NFS-Server (systemd:nfs-server): Stopped
[root@nfs1 ~]# pcs cluster cib-push nfs-cluster-config
CIB updated
[root@nfs1 ~]# pcs resource status
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Masters: [ nfs1.example.com ]
* Slaves: [ nfs2.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Started nfs1.example.com
* NFS-Server (systemd:nfs-server): Started nfs1.example.com

Step 5: Add Virtual IP resource to the NFS cluster

pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.10.10 cidr_netmask=32 op monitor interval=30s
pcs constraint colocation add ClusterIP with NFS-Server INFINITY
pcs constraint order start NFS-Server then start ClusterIP
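
From any NFS client in the 192.168.10.0/24 network (a hypothetical client, not part of the original setup), the highly available export can now be mounted through the service name:

dnf install nfs-utils
mkdir -p /mnt/test
mount -t nfs nfs.example.com:/mnt/nfsdata/folder1 /mnt/test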

Step 6: Verify the cluster

[root@nfs1 ~]# pcs status
Cluster name: nfs-cluster
Cluster Summary:
* Stack: corosync
* Current DC: nfs1.example.com (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
* Last updated: Fri Jul 10 15:36:33 2020
* Last change: Fri Jul 10 15:36:22 2020 by root via cibadmin on nfs1.example.com
* 2 nodes configured
* 5 resource instances configured
Node List:
* Online: [ nfs1.example.com nfs2.example.com ]
Full List of Resources:
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Masters: [ nfs1.example.com ]
* Slaves: [ nfs2.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Started nfs1.example.com
* NFS-Server (systemd:nfs-server): Started nfs1.example.com
* ClusterIP (ocf::heartbeat:IPaddr2): Started nfs1.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
echo "Written from node1" >> /mnt/nfsdata/folder1/testfile
pcs node standby nfs1.example.com
[root@nfs2 ~]# pcs status
Cluster name: nfs-cluster
Cluster Summary:
* Stack: corosync
* Current DC: nfs1.example.com (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
* Last updated: Fri Jul 10 15:37:35 2020
* Last change: Fri Jul 10 15:37:26 2020 by root via cibadmin on nfs1.example.com
* 2 nodes configured
* 5 resource instances configured
Node List:
* Node nfs1.example.com: standby
* Online: [ nfs2.example.com ]
Full List of Resources:
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Masters: [ nfs2.example.com ]
* Stopped: [ nfs1.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Started nfs2.example.com
* NFS-Server (systemd:nfs-server): Started nfs2.example.com
* ClusterIP (ocf::heartbeat:IPaddr2): Started nfs2.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@nfs2 ~]# cat /mnt/nfsdata/folder1/testfile
Written from node1
[root@nfs2 ~]# echo "Written from node2" >> /mnt/nfsdata/folder1/testfile
[root@nfs2 ~]# cat /mnt/nfsdata/folder1/testfile
Written from node1
Written from node2
[root@nfs2 ~]#
pcs node unstandby nfs1.example.com
pcs node standby nfs2.example.com
pcs node unstandby nfs2.example.com
[root@nfs1 ~]# pcs status
Cluster name: nfs-cluster
Cluster Summary:
* Stack: corosync
* Current DC: nfs1.example.com (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
* Last updated: Fri Jul 10 15:38:00 2020
* Last change: Fri Jul 10 15:37:49 2020 by root via cibadmin on nfs2.example.com
* 2 nodes configured
* 5 resource instances configured
Node List:
* Online: [ nfs1.example.com nfs2.example.com ]
Full List of Resources:
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Masters: [ nfs1.example.com ]
* Slaves: [ nfs2.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Started nfs1.example.com
* NFS-Server (systemd:nfs-server): Started nfs1.example.com
* ClusterIP (ocf::heartbeat:IPaddr2): Started nfs1.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@nfs1 ~]# cat /mnt/nfsdata/folder1/testfile
Written from node1
Written from node2
Finally, save a copy of the working cluster configuration to a file for reference:

pcs cluster cib nfs-cluster-config

E) Setup VMware Fencing

Step 1: Configure VMware Fencing in vCenter

  1. Create a role for the user account to perform VMware fencing.
  • Role Name: Linux HA Fencing
  • Permission: System.Anonymous, System.View, VirtualMachine.Interact.PowerOff, VirtualMachine.Interact.PowerOn

Step 2: Install VMware Fence Agent

dnf install fence-agents-vmware-rest

Step 3: Add fencing to the NFS cluster

[root@nfs1 ~]# fence_vmware_rest -a vcenter.example.com -l 'nfsfence@vsphere.local' -p '<password>' --ssl-insecure -z -o status -n nfs1
Status: ON
[root@nfs1 ~]# fence_vmware_rest -a vcenter.example.com -l 'nfsfence@vsphere.local' -p '<password>' --ssl-insecure -z -o status -n nfs2
Status: ON
[root@nfs2 ~]# fence_vmware_rest -a vcenter.example.com -l 'nfsfence@vsphere.local' -p '<password>' --ssl-insecure -z -o status -n nfs1
Status: ON
[root@nfs2 ~]# fence_vmware_rest -a vcenter.example.com -l 'nfsfence@vsphere.local' -p '<password>' --ssl-insecure -z -o status -n nfs2
Status: ON
Create the fencing resource; pcmk_host_map maps each cluster node name to the corresponding VM name in vCenter:

pcs stonith create Fence-vCenter fence_vmware_rest pcmk_host_map="nfs1.example.com:nfs1;nfs2.example.com:nfs2" ipaddr=vcenter.example.com ssl=1 ssl_insecure=1 login='nfsfence@vsphere.local' passwd='<password>'
[root@nfs1 ~]# pcs stonith status
* Fence-vCenter (stonith:fence_vmware_rest): Started nfs1.example.com
pcs property set stonith-enabled=true
[root@nfs1 ~]# pcs status
Cluster name: nfs-cluster
Cluster Summary:
* Stack: corosync
* Current DC: nfs2.example.com (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
* Last updated: Tue Aug 11 14:22:31 2020
* Last change: Tue Aug 11 14:22:28 2020 by root via cibadmin on nfs2.example.com
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ nfs1.example.com nfs2.example.com ]
Full List of Resources:
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Masters: [ nfs1.example.com ]
* Slaves: [ nfs2.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Started nfs1.example.com
* NFS-Server (systemd:nfs-server): Started nfs1.example.com
* ClusterIP (ocf::heartbeat:IPaddr2): Started nfs1.example.com
* Fence-vCenter (stonith:fence_vmware_rest): Started nfs2.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Save an updated copy of the cluster configuration that now includes fencing:

pcs cluster cib nfs-cluster-config-with-fencing

Step 4: Test fencing of node nfs1

[root@nfs2 ~]# pcs stonith fence nfs1.example.com
Node: nfs1.example.com fenced
[root@nfs2 ~]# pcs status
Cluster name: nfs-cluster
Cluster Summary:
* Stack: corosync
* Current DC: nfs2.example.com (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
* Last updated: Tue Aug 11 14:24:14 2020
* Last change: Tue Aug 11 14:22:28 2020 by root via cibadmin on nfs2.example.com
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ nfs2.example.com ]
* OFFLINE: [ nfs1.example.com ]
Full List of Resources:
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Masters: [ nfs2.example.com ]
* Stopped: [ nfs1.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Started nfs2.example.com
* NFS-Server (systemd:nfs-server): Started nfs2.example.com
* ClusterIP (ocf::heartbeat:IPaddr2): Started nfs2.example.com
* Fence-vCenter (stonith:fence_vmware_rest): Started nfs2.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@nfs2 ~]# pcs status
Cluster name: nfs-cluster
Cluster Summary:
* Stack: corosync
* Current DC: nfs2.example.com (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
* Last updated: Tue Aug 11 14:28:25 2020
* Last change: Tue Aug 11 14:22:28 2020 by root via cibadmin on nfs2.example.com
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ nfs1.example.com nfs2.example.com ]
Full List of Resources:
* Clone Set: NFS-DRBD-clone [NFS-DRBD] (promotable):
* Masters: [ nfs2.example.com ]
* Slaves: [ nfs1.example.com ]
* NFS-Data (ocf::heartbeat:Filesystem): Started nfs2.example.com
* NFS-Server (systemd:nfs-server): Started nfs2.example.com
* ClusterIP (ocf::heartbeat:IPaddr2): Started nfs2.example.com
* Fence-vCenter (stonith:fence_vmware_rest): Started nfs1.example.com
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

F) Custom tuning of the NFS cluster

To increase the Corosync token timeout to 10 seconds, edit the totem section of /etc/corosync/corosync.conf on one node, then sync the file to the other node and reload Corosync:

totem {
    version: 2
    cluster_name: nfs-cluster
    transport: knet
    crypto_cipher: aes256
    crypto_hash: sha256
    token: 10000
}
[root@nfs1 ~]# pcs cluster sync
nfs1.example.com: Succeeded
nfs2.example.com: Succeeded
[root@nfs1 ~]# pcs cluster reload corosync
Corosync reloaded
[root@nfs1 ~]# corosync-cmapctl | grep totem.token
runtime.config.totem.token (u32) = 10000
runtime.config.totem.token_retransmit (u32) = 2380
runtime.config.totem.token_retransmits_before_loss_const (u32) = 4
runtime.config.totem.token_warning (u32) = 75
totem.token (u32) = 10000
