I use several different approaches to deploy MongoDB pods in development. Most of them use shared NFS folders for volume mounts, one of which stores the keyfile used for pod authentication.
sudo useradd mongodb
# take note of the user's uid to use for mount permissions in the yaml config
grep mongodb /etc/passwd
# in this case: 1001
mongodb:x:1001:1001::/home/mongodb:/bin/sh
Create the directories that will be shared with the worker nodes.
sudo mkdir -p /srv/nfs/kubedata /srv/nfs/mongo-keyfile
sudo chown mongodb:mongodb /srv/nfs/kubedata /srv/nfs/mongo-keyfile
On the node exporting the shares (here, the control plane), make sure the NFS server is installed.
sudo apt update
sudo apt install nfs-kernel-server
Configure the NFS exports.
sudo nano /etc/exports
# Add the entries below:
/srv/nfs/mongo-keyfile *(rw,fsid=1,sync,no_subtree_check,anonuid=1001,anongid=1001)
/srv/nfs/kubedata node01(rw,sync,no_subtree_check,all_squash,anonuid=1001,anongid=1001) node02(rw,sync,no_subtree_check,all_squash,anonuid=1001,anongid=1001)
Export the shares, then start and enable the NFS service.
sudo exportfs -ra
sudo systemctl start nfs-kernel-server
sudo systemctl enable nfs-kernel-server
On each worker node, make sure the NFS client is installed.
sudo apt update
sudo apt install nfs-common
Create a mount point and mount the shared folder.
sudo mkdir -p /mnt/nfs/kubedata
sudo mount <your-nfs-server-ip>:/srv/nfs/kubedata /mnt/nfs/kubedata
# verify the mount:
df -h
Make the mount persistent by adding an entry to /etc/fstab.
sudo nano /etc/fstab
# add the entry below
control-plane-ip:/srv/nfs/kubedata /mnt/nfs/kubedata nfs defaults 0 0
Generate the keyfile that replica set members use to authenticate each other.
# I'm naming mine mkey
openssl rand -base64 756 > mkey
chmod 400 mkey
sudo chown mongodb:mongodb mkey
# move it to the shared keyfile folder
sudo mv mkey /srv/nfs/mongo-keyfile/
Refer to the deployment yaml files for more details.
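As a rough sketch of how the exports above can be consumed from the cluster (names and sizes here are hypothetical; the actual manifests are in the deployment yaml files), a statically provisioned PersistentVolume backed by the kubedata share might look like this:

```yaml
# Hypothetical example: a PersistentVolume/Claim pair backed by the NFS export.
# Replace control-plane-ip with your NFS server address.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: control-plane-ip   # your NFS server
    path: /srv/nfs/kubedata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind statically, skip dynamic provisioning
  volumeName: mongo-nfs-pv
  resources:
    requests:
      storage: 5Gi
```

Because the export squashes all access to uid/gid 1001 (anonuid/anongid), writes from the pods land on disk as the mongodb user created earlier.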
If you are running a replica set, you need to initiate replication right after the first pod is created, since MongoDB requires explicit initialization. rs.initiate() is an administrative command that sets up the internal replication structure: MongoDB first ensures the nodes are reachable, then one of them (typically the first replica) is elected primary.
While Kubernetes can spin up multiple replicas, MongoDB itself must be told to form a replica set. The methods below help automate that process within a containerized setup.
kubectl exec -ti <pod name> -- mongosh -u <username> -p <password>
# then, at the mongosh prompt, run:
rs.initiate()
I don’t like the manual approach, so I deploy my replica set with a sidecar instead.
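A hedged sketch of what that looks like, using the community cvallance/mongo-k8s-sidecar image to watch pods and reconfigure the replica set automatically (resource names, replica count, and image tags here are illustrative; the real manifests are in the deployment yaml files):

```yaml
# Hypothetical StatefulSet pod spec: mongod plus a replica-set sidecar,
# with the keyfile mounted from the NFS share prepared earlier.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      securityContext:
        runAsUser: 1001          # the mongodb user's uid from earlier
        fsGroup: 1001
      containers:
        - name: mongod
          image: mongo
          command:
            - mongod
            - --replSet=rs0
            - --keyFile=/etc/mongo-keyfile/mkey
            - --bind_ip_all
          volumeMounts:
            - name: keyfile
              mountPath: /etc/mongo-keyfile
              readOnly: true
        - name: sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS
              value: "app=mongo"
      volumes:
        - name: keyfile
          nfs:
            server: control-plane-ip   # your NFS server
            path: /srv/nfs/mongo-keyfile
```

The sidecar watches pods matching the label selector and adds or removes them from the replica set, which replaces the manual rs.initiate() step above.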