#GlusterFS with Kubernetes
virtualizationhowto · 1 year ago
Top 5 Open Source Kubernetes Storage Solutions
Top 5 Open Source Kubernetes Storage Solutions #homelab #ceph #rook #glusterfs #longhorn #openebs #KubernetesStorageSolutions #OpenSourceStorageForKubernetes #CephRBDKubernetes #GlusterFSWithKubernetes #OpenEBSInKubernetes #RookStorage #LonghornKubernetes
Historically, Kubernetes storage has been challenging to configure and required specialized knowledge to get up and running. However, the landscape of K8s data storage has evolved greatly, with many good options that are relatively easy to implement for data stored in Kubernetes clusters. Those running Kubernetes in a home lab will also benefit from the free and open-source…
cloudlodge · 1 year ago
Top 5 Open Source Kubernetes Storage Solutions - Virtualization Howto
computingpostcom · 2 years ago
This article covers in detail the installation and configuration of Rook, and how to integrate a highly available Ceph storage cluster into an existing Kubernetes cluster. I'm performing this process on a recent deployment of Kubernetes on Rocky Linux 8 servers, but it can be used with any other Kubernetes cluster deployed with kubeadm or with automation tools such as Kubespray and Rancher. In the early days of Kubernetes, most deployed applications were stateless, meaning there was no need for data persistence. However, as Kubernetes became more popular, there were concerns around reliability when scheduling stateful services. Currently, you can use many types of storage volumes, including vSphere Volumes, Ceph, AWS Elastic Block Store, Glusterfs, NFS, and GCE Persistent Disk, among many others. This gives us the comfort of running stateful services that require a robust storage backend.
What is Rook / Ceph?
Rook is a free, powerful, cloud-native open source storage orchestrator for Kubernetes. It supports a diverse set of storage solutions that integrate natively with cloud-native environments. More details about the storage solutions currently supported by Rook are captured in the project status section. Ceph is a distributed storage system that provides file, block and object storage and is deployed in large-scale production clusters. Rook enables us to automate the deployment, bootstrapping, configuration, scaling and upgrading of a Ceph cluster within a Kubernetes environment. Ceph is widely used in in-house infrastructure, where a managed storage solution is rarely an option. Rook uses Kubernetes primitives to run and manage software-defined storage on Kubernetes.
Key components of the Rook storage orchestrator:
Custom resource definitions (CRDs) – used to create and customize storage clusters. The CRDs are applied to Kubernetes during the deployment process.
Rook Operator for Ceph – automates the whole configuration of the storage components and monitors the cluster to ensure it is healthy and available.
A DaemonSet called rook-discover – starts a pod running a discovery agent on every node of your Kubernetes cluster to discover any raw disk devices / partitions that can be used as Ceph OSD disks.
Monitoring – Rook enables the Ceph Dashboard and provides metrics collectors/exporters and monitoring dashboards.
Features of Rook:
Rook enables you to provision block, file, and object storage with multiple storage providers.
Capability to efficiently distribute and replicate data to minimize potential loss.
Rook is designed to manage open-source storage technologies – NFS, Ceph, Cassandra.
Rook is open source software released under the Apache 2.0 license.
With Rook you can hyper-scale or hyper-converge your storage clusters within a Kubernetes environment.
Rook allows system administrators to easily enable elastic storage in the datacenter.
By adopting Rook as your storage orchestrator, you are able to optimize workloads on commodity hardware.
Deploy Rook & Ceph Storage on a Kubernetes Cluster
These are the minimal setup requirements for the deployment of Rook and Ceph storage on a Kubernetes cluster.
A cluster with a minimum of three nodes
Available raw disk devices (with no partitions or formatted filesystems), or
Raw partitions (without a formatted filesystem), or
Persistent Volumes available from a storage class in block mode
Step 1: Add raw devices/partitions to nodes that will be used by Rook
List all the nodes in your Kubernetes cluster and decide which ones will be used to build the Ceph storage cluster. I recommend you use worker nodes and not the control plane machines.
[root@k8s-bastion ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8smaster01.hirebestengineers.com Ready control-plane,master 28m v1.22.2
k8smaster02.hirebestengineers.com Ready control-plane,master 24m v1.22.2
k8smaster03.hirebestengineers.com Ready control-plane,master 23m v1.22.2
k8snode01.hirebestengineers.com Ready 22m v1.22.2
k8snode02.hirebestengineers.com Ready 21m v1.22.2
k8snode03.hirebestengineers.com Ready 21m v1.22.2
k8snode04.hirebestengineers.com Ready 21m v1.22.2
In my lab environment, each of the worker nodes will have one raw device – /dev/vdb – which we'll add later.
[root@k8s-worker-01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 1M 0 part
├─vda2 253:2 0 1G 0 part /boot
├─vda3 253:3 0 615M 0 part
└─vda4 253:4 0 38.4G 0 part /
[root@k8s-worker-01 ~]# free -h
total used free shared buff/cache available
Mem: 15Gi 209Mi 14Gi 8.0Mi 427Mi 14Gi
Swap: 614Mi 0B 614Mi
The following list of nodes will be used to build the storage cluster.
[root@kvm-private-lab ~]# virsh list | grep k8s-worker
31 k8s-worker-01-server running
36 k8s-worker-02-server running
38 k8s-worker-03-server running
41 k8s-worker-04-server running
Add secondary storage to each node
If using the KVM hypervisor, start by listing storage pools:
$ sudo virsh pool-list
Name State Autostart
------------------------------
images active yes
I'll add a 40GB volume on the default storage pool. This can be done with a for loop:
for domain in k8s-worker-0{1..4}-server; do
  sudo virsh vol-create-as images $domain-disk-2.qcow2 40G
done
Command execution output:
Vol k8s-worker-01-server-disk-2.qcow2 created
Vol k8s-worker-02-server-disk-2.qcow2 created
Vol k8s-worker-03-server-disk-2.qcow2 created
Vol k8s-worker-04-server-disk-2.qcow2 created
You can check image details, including size, using the qemu-img command:
$ qemu-img info /var/lib/libvirt/images/k8s-worker-01-server-disk-2.qcow2
image: /var/lib/libvirt/images/k8s-worker-01-server-disk-2.qcow2
file format: raw
virtual size: 40 GiB (42949672960 bytes)
disk size: 40 GiB
To attach the created volume(s) above to the virtual machine, run:
for domain in k8s-worker-0{1..4}-server; do
  sudo virsh attach-disk --domain $domain \
    --source /var/lib/libvirt/images/$domain-disk-2.qcow2 \
    --persistent --target vdb
done
--persistent: Make live change persistent
--target vdb: Target of a disk device
Confirm the attach was successful:
Disk attached successfully
Disk attached successfully
Disk attached successfully
Disk attached successfully
You can confirm that the volume was added to the VM as a block device /dev/vdb:
[root@k8s-worker-01 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 40G 0 disk
├─vda1 253:1 0 1M 0 part
├─vda2 253:2 0 1G 0 part /boot
├─vda3 253:3 0 615M 0 part
└─vda4 253:4 0 38.4G 0 part /
vdb 253:16 0 40G 0 disk
Step 2: Deploy Rook Storage Orchestrator
Clone the rook project from GitHub using the git command. This should be done on a machine with kubeconfig configured and confirmed to be working. You can also clone a specific Rook branch, as in a release tag, for example:
cd ~/
git clone --single-branch --branch release-1.8 https://github.com/rook/rook.git
All nodes with available raw devices will be used for the Ceph cluster. As stated earlier, at least three nodes are required.
cd rook/deploy/examples/
Deploy the Rook Operator
The first step in deploying the Rook operator is to create the resources it needs from the example manifests, starting with the CRDs.
Create required CRDs as specified in crds.yaml manifest: [root@k8s-bastion ceph]# kubectl create -f crds.yaml customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephfilesystemmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created customresourcedefinition.apiextensions.k8s.io/volumereplicationclasses.replication.storage.openshift.io created customresourcedefinition.apiextensions.k8s.io/volumereplications.replication.storage.openshift.io created customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created Create common resources as in common.yaml file: [root@k8s-bastion ceph]# kubectl create -f common.yaml namespace/rook-ceph created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created serviceaccount/rook-ceph-admission-controller created clusterrole.rbac.authorization.k8s.io/rook-ceph-admission-controller-role created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-admission-controller-rolebinding created clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created clusterrole.rbac.authorization.k8s.io/rook-ceph-system created role.rbac.authorization.k8s.io/rook-ceph-system created clusterrole.rbac.authorization.k8s.io/rook-ceph-global created clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created serviceaccount/rook-ceph-system created rolebinding.rbac.authorization.k8s.io/rook-ceph-system created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created serviceaccount/rook-ceph-osd created serviceaccount/rook-ceph-mgr created serviceaccount/rook-ceph-cmd-reporter created role.rbac.authorization.k8s.io/rook-ceph-osd created clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created role.rbac.authorization.k8s.io/rook-ceph-mgr created role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ podsecuritypolicy.policy/00-rook-privileged created clusterrole.rbac.authorization.k8s.io/psp:rook created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created 
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created serviceaccount/rook-csi-cephfs-plugin-sa created serviceaccount/rook-csi-cephfs-provisioner-sa created role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created serviceaccount/rook-csi-rbd-plugin-sa created serviceaccount/rook-csi-rbd-provisioner-sa created role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created role.rbac.authorization.k8s.io/rook-ceph-purge-osd created rolebinding.rbac.authorization.k8s.io/rook-ceph-purge-osd created serviceaccount/rook-ceph-purge-osd created Finally deploy Rook ceph operator from operator.yaml manifest file: [root@k8s-bastion ceph]# kubectl create -f operator.yaml configmap/rook-ceph-operator-config created deployment.apps/rook-ceph-operator created After few seconds Rook components should be up and running as seen below: [root@k8s-bastion ceph]# kubectl get all -n rook-ceph NAME READY STATUS RESTARTS AGE pod/rook-ceph-operator-9bf8b5959-nz6hd 1/1 Running 0 45s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/rook-ceph-operator 1/1 1 1 45s NAME DESIRED CURRENT READY AGE replicaset.apps/rook-ceph-operator-9bf8b5959 1 1 1 45s Verify the rook-ceph-operator is in the Running state before proceeding: [root@k8s-bastion ceph]# kubectl -n rook-ceph get pod NAME READY STATUS RESTARTS AGE rook-ceph-operator-76dc868c4b-zk2tj 1/1 Running 0 69s Step 3: Create a Ceph Storage Cluster on Kubernetes using Rook Now that we have prepared worker nodes by adding raw disk devices and deployed Rook operator, it is time to deploy the Ceph Storage Cluster. Let’s set default namespace to rook-ceph: # kubectl config set-context --current --namespace rook-ceph Context "kubernetes-admin@kubernetes" modified. Considering that Rook Ceph clusters can discover raw partitions by itself, it is okay to use the default cluster deployment manifest file without any modifications. [root@k8s-bastion ceph]# kubectl create -f cluster.yaml cephcluster.ceph.rook.io/rook-ceph created For any further customizations on Ceph Cluster check Ceph Cluster CRD documentation. 
When not using all the nodes, you can explicitly define the nodes and raw devices to be used, as seen in the example below:
storage: # cluster level storage configuration and selection
  useAllNodes: false
  useAllDevices: false
  nodes:
  - name: "k8snode01.hirebestengineers.com"
    devices: # specific devices to use for storage can be specified for each node
    - name: "sdb"
  - name: "k8snode03.hirebestengineers.com"
    devices:
    - name: "sdb"
To view all resources created, run the following command:
kubectl get all -n rook-ceph
Watching Pod creation in the rook-ceph namespace:
[root@k8s-bastion ceph]# kubectl get pods -n rook-ceph -w
This is a list of Pods running in the namespace after a successful deployment:
[root@k8s-bastion ceph]# kubectl get pods -n rook-ceph
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-8vrgj 3/3 Running 0 5m39s csi-cephfsplugin-9csbp 3/3 Running 0 5m39s csi-cephfsplugin-lh42b 3/3 Running 0 5m39s csi-cephfsplugin-provisioner-b54db7d9b-kh89q 6/6 Running 0 5m39s csi-cephfsplugin-provisioner-b54db7d9b-l92gm 6/6 Running 0 5m39s csi-cephfsplugin-xc8tk 3/3 Running 0 5m39s csi-rbdplugin-28th4 3/3 Running 0 5m41s csi-rbdplugin-76bhw 3/3 Running 0 5m41s csi-rbdplugin-7ll7w 3/3 Running 0 5m41s csi-rbdplugin-provisioner-5845579d68-5rt4x 6/6 Running 0 5m40s csi-rbdplugin-provisioner-5845579d68-p6m7r 6/6 Running 0 5m40s csi-rbdplugin-tjlsk 3/3 Running 0 5m41s rook-ceph-crashcollector-k8snode01.hirebestengineers.com-7ll2x6 1/1 Running 0 3m3s rook-ceph-crashcollector-k8snode02.hirebestengineers.com-8ghnq9 1/1 Running 0 2m40s rook-ceph-crashcollector-k8snode03.hirebestengineers.com-7t88qp 1/1 Running 0 3m14s rook-ceph-crashcollector-k8snode04.hirebestengineers.com-62n95v 1/1 Running 0 3m14s rook-ceph-mgr-a-7cf9865b64-nbcxs 1/1 Running 0 3m17s rook-ceph-mon-a-555c899765-84t2n 1/1 Running 0 5m47s rook-ceph-mon-b-6bbd666b56-lj44v 1/1 Running 0 4m2s rook-ceph-mon-c-854c6d56-dpzgc 1/1 Running 0 3m28s rook-ceph-operator-9bf8b5959-nz6hd 1/1 Running 0 13m rook-ceph-osd-0-5b7875db98-t5mdv 1/1 Running 0 3m6s rook-ceph-osd-1-677c4cd89-b5rq2 1/1 Running 0 3m5s rook-ceph-osd-2-6665bc998f-9ck2f 1/1 Running 0 3m3s rook-ceph-osd-3-75d7b47647-7vfm4 1/1 Running 0 2m40s rook-ceph-osd-prepare-k8snode01.hirebestengineers.com--1-6kbkn 0/1 Completed 0 3m14s rook-ceph-osd-prepare-k8snode02.hirebestengineers.com--1-5hz49 0/1 Completed 0 3m14s rook-ceph-osd-prepare-k8snode03.hirebestengineers.com--1-4b45z 0/1 Completed 0 3m14s rook-ceph-osd-prepare-k8snode04.hirebestengineers.com--1-4q8cs 0/1 Completed 0 3m14s Each worker node will have a Job to add OSDs into Ceph Cluster: [root@k8s-bastion ceph]# kubectl get -n rook-ceph jobs.batch NAME COMPLETIONS DURATION AGE rook-ceph-osd-prepare-k8snode01.hirebestengineers.com 1/1 11s 3m46s rook-ceph-osd-prepare-k8snode02.hirebestengineers.com 1/1 34s 3m46s rook-ceph-osd-prepare-k8snode03.hirebestengineers.com 1/1 10s 3m46s rook-ceph-osd-prepare-k8snode04.hirebestengineers.com 1/1 9s 3m46s [root@k8s-bastion ceph]# kubectl describe jobs.batch rook-ceph-osd-prepare-k8snode01.hirebestengineers.com Verify that the cluster CR has been created and active: [root@k8s-bastion ceph]# kubectl -n rook-ceph get cephcluster NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL rook-ceph /var/lib/rook 3 3m50s Ready Cluster created successfully HEALTH_OK
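As an alternative to listing every node and raw device by name (as in the storage snippet shown earlier in this step), the CephCluster CRD also accepts a device filter. The snippet below is a minimal sketch, assuming the OSD disks all appear as /dev/vdb or /dev/vdc on the workers; adjust the regular expression to match your own environment.
storage:
  useAllNodes: true
  useAllDevices: false
  # regex applied to device names; only matching raw devices become OSDs
  deviceFilter: "^vd[bc]"
With this approach, any matching raw device on any node is picked up automatically, which is convenient in a homogeneous lab but riskier on mixed hardware.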
Step 4: Deploy Rook Ceph toolbox in Kubernetes
The Rook Ceph toolbox is a container with common tools used for Rook debugging and testing. The toolbox is based on CentOS, and any additional tools can be easily installed via yum. We will start a toolbox pod in interactive mode so we can connect and execute Ceph commands from a shell. Change to the examples directory:
cd ~/
cd rook/deploy/examples
Apply the toolbox.yaml manifest file to create the toolbox pod:
[root@k8s-bastion ceph]# kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
Connect to the pod using the kubectl command with the exec option:
[root@k8s-bastion ~]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-96c99fbf-qb9cj /]#
Check the Ceph storage cluster status. Be keen on the value of cluster.health; it should be HEALTH_OK.
[root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph status
cluster:
id: 470b7cde-7355-4550-bdd2-0b79d736b8ac
health: HEALTH_OK
services:
mon: 3 daemons, quorum a,b,c (age 5m)
mgr: a(active, since 4m)
osd: 4 osds: 4 up (since 4m), 4 in (since 5m)
data:
pools: 1 pools, 128 pgs
objects: 0 objects, 0 B
usage: 25 MiB used, 160 GiB / 160 GiB avail
pgs: 128 active+clean
List all OSDs to check their current status. They should exist and be up.
[root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph osd status
ID HOST USED AVAIL WR OPS WR DATA RD OPS RD DATA STATE
0 k8snode04.hirebestengineers.com 6776k 39.9G 0 0 0 0 exists,up
1 k8snode03.hirebestengineers.com 6264k 39.9G 0 0 0 0 exists,up
2 k8snode01.hirebestengineers.com 6836k 39.9G 0 0 0 0 exists,up
3 k8snode02.hirebestengineers.com 6708k 39.9G 0 0 0 0 exists,up
Check raw storage and pools:
[root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 160 GiB 160 GiB 271 MiB 271 MiB 0.17
TOTAL 160 GiB 160 GiB 271 MiB 271 MiB 0.17
--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 32 0 B 0 0 B 0 51 GiB
replicapool 3 32 35 B 8 24 KiB 0 51 GiB
k8fs-metadata 8 128 91 KiB 24 372 KiB 0 51 GiB
k8fs-data0 9 32 0 B 0 0 B 0 51 GiB
[root@rook-ceph-tools-96c99fbf-qb9cj /]# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR
device_health_metrics 0 B 0 0 0 0 0 0 0 0 B 0 0 B 0 B 0 B
k8fs-data0 0 B 0 0 0 0 0 0 1 1 KiB 2 1 KiB 0 B 0 B
k8fs-metadata 372 KiB 24 0 72 0 0 0 351347 172 MiB 17 26 KiB 0 B 0 B
replicapool 24 KiB 8 0 24 0 0 0 999 6.9 MiB 1270 167 MiB 0 B 0 B
total_objects 32
total_used 271 MiB
total_avail 160 GiB
total_space 160 GiB
Step 5: Working with Ceph Cluster Storage Modes
You have three types of storage exposed by Rook:
Shared Filesystem: Create a filesystem to be shared across multiple pods (RWX)
Block: Create block storage to be consumed by a pod (RWO)
Object: Create an object store that is accessible inside or outside the Kubernetes cluster
All the necessary files for each storage mode are available in the rook/deploy/examples/ directory.
cd ~/
cd rook/deploy/examples
1. Cephfs
Cephfs is used to enable a shared filesystem which can be mounted with read/write permission from multiple pods.
Update the filesystem.yaml file by setting data pool name, replication size e.t.c. [root@k8s-bastion ceph]# vim filesystem.yaml apiVersion: ceph.rook.io/v1 kind: CephFilesystem metadata: name: k8sfs namespace: rook-ceph # namespace:cluster Once done with modifications let Rook operator create all the pools and other resources necessary to start the service: [root@k8s-bastion ceph]# kubectl create -f filesystem.yaml cephfilesystem.ceph.rook.io/k8sfs created Access Rook toolbox pod and check if metadata and data pools are created. [root@k8s-bastion ceph]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash [root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph fs ls name: k8sfs, metadata pool: k8sfs-metadata, data pools: [k8sfs-data0 ] [root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph osd lspools 1 device_health_metrics 3 replicapool 8 k8fs-metadata 9 k8fs-data0 [root@rook-ceph-tools-96c99fbf-qb9cj /]# exit Update the fsName and pool name in Cephfs Storageclass configuration file: $ vim csi/cephfs/storageclass.yaml parameters: clusterID: rook-ceph # namespace:cluster fsName: k8sfs pool: k8fs-data0 Create StorageClass using the command: [root@k8s-bastion csi]# kubectl create -f csi/cephfs/storageclass.yaml storageclass.storage.k8s.io/rook-cephfs created List available storage classes in your Kubernetes Cluster: [root@k8s-bastion csi]# kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rook-cephfs rook-ceph.cephfs.csi.ceph.com Delete Immediate true 97s Create test PVC and Pod to test usage of Persistent Volume. [root@k8s-bastion csi]# kubectl create -f csi/cephfs/pvc.yaml persistentvolumeclaim/cephfs-pvc created [root@k8s-bastion ceph]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc Bound pvc-fd024cc0-dcc3-4a1d-978b-a166a2f65cdb 1Gi RWO rook-cephfs 4m42s [root@k8s-bastion csi]# kubectl create -f csi/cephfs/pod.yaml pod/csicephfs-demo-pod created PVC creation manifest file contents: --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: rook-cephfs Checking PV creation logs as captured by the provisioner pod: [root@k8s-bastion csi]# kubectl logs deploy/csi-cephfsplugin-provisioner -f -c csi-provisioner [root@k8s-bastion ceph]# kubectl get pods | grep csi-cephfsplugin-provision csi-cephfsplugin-provisioner-b54db7d9b-5dpt6 6/6 Running 0 4m30s csi-cephfsplugin-provisioner-b54db7d9b-wrbxh 6/6 Running 0 4m30s If you made an update and provisioner didn’t pick you can always restart the Cephfs Provisioner Pods: # Gracefully $ kubectl delete pod -l app=csi-cephfsplugin-provisioner # Forcefully $ kubectl delete pod -l app=csi-cephfsplugin-provisioner --grace-period=0 --force 2. RBD Block storage allows a single pod to mount storage (RWO mode). 
Before Rook can provision storage, a StorageClass and CephBlockPool need to be created [root@k8s-bastion ~]# cd [root@k8s-bastion ~]# cd rook/deploy/examples [root@k8s-bastion csi]# kubectl create -f csi/rbd/storageclass.yaml cephblockpool.ceph.rook.io/replicapool created storageclass.storage.k8s.io/rook-ceph-block created [root@k8s-bastion csi]# kubectl create -f csi/rbd/pvc.yaml persistentvolumeclaim/rbd-pvc created List StorageClasses and PVCs: [root@k8s-bastion csi]# kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rook-ceph-block rook-ceph.rbd.csi.ceph.com Delete Immediate true 49s rook-cephfs rook-ceph.cephfs.csi.ceph.com Delete Immediate true 6h17m
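A pod consumes this block PVC like any ordinary claim. Below is a minimal sketch (not the exact csi/rbd/pod.yaml shipped in the repo); the pod name, image, and mount path are arbitrary, while claimName refers to the rbd-pvc created above.
apiVersion: v1
kind: Pod
metadata:
  name: rbd-demo-pod
spec:
  containers:
  - name: web-server
    image: nginx:1.21
    volumeMounts:
    - name: rbd-volume
      mountPath: /var/lib/www/html   # arbitrary mount point inside the container
  volumes:
  - name: rbd-volume
    persistentVolumeClaim:
      claimName: rbd-pvc             # the block PVC created earlier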
[root@k8s-bastion csi]# kubectl get pvc rbd-pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rbd-pvc Bound pvc-c093e6f7-bb4e-48df-84a7-5fa99fe81138 1Gi RWO rook-ceph-block 43s
Deploying multiple apps
We will create a sample application to consume the block storage provisioned by Rook, with the classic WordPress and MySQL apps. Both of these apps will make use of block volumes provisioned by Rook.
[root@k8s-bastion ~]# cd
[root@k8s-bastion ~]# cd rook/deploy/examples
[root@k8s-bastion kubernetes]# kubectl create -f mysql.yaml
service/wordpress-mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/wordpress-mysql created
[root@k8s-bastion kubernetes]# kubectl create -f wordpress.yaml
service/wordpress created
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created
Both of these apps create a block volume and mount it to their respective pod. You can see the Kubernetes volume claims by running the following:
[root@k8smaster01 kubernetes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cephfs-pvc Bound pvc-aa972f9d-ab53-45f6-84c1-35a192339d2e 1Gi RWO rook-cephfs 2m59s
mysql-pv-claim Bound pvc-4f1e541a-1d7c-49b3-93ef-f50e74145057 20Gi RWO rook-ceph-block 10s
rbd-pvc Bound pvc-68e680c1-762e-4435-bbfe-964a4057094a 1Gi RWO rook-ceph-block 47s
wp-pv-claim Bound pvc-fe2239a5-26c0-4ebc-be50-79dc8e33dc6b 20Gi RWO rook-ceph-block 5s
Check the deployment of the MySQL and WordPress services:
[root@k8s-bastion kubernetes]# kubectl get deploy wordpress wordpress-mysql
NAME READY UP-TO-DATE AVAILABLE AGE
wordpress 1/1 1 1 2m46s
wordpress-mysql 1/1 1 1 3m8s
[root@k8s-bastion kubernetes]# kubectl get svc wordpress wordpress-mysql
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wordpress LoadBalancer 10.98.120.112 80:32046/TCP 3m39s
wordpress-mysql ClusterIP None 3306/TCP 4m1s
Retrieve the WordPress NodePort and test the URL using the LB IP address and the port:
NodePort=$(kubectl get service wordpress -o jsonpath='{.spec.ports[0].nodePort}')
echo $NodePort
Cleanup of the storage test PVCs and pods:
[root@k8s-bastion kubernetes]# kubectl delete -f mysql.yaml
service "wordpress-mysql" deleted
persistentvolumeclaim "mysql-pv-claim" deleted
deployment.apps "wordpress-mysql" deleted
[root@k8s-bastion kubernetes]# kubectl delete -f wordpress.yaml
service "wordpress" deleted
persistentvolumeclaim "wp-pv-claim" deleted
deployment.apps "wordpress" deleted
# Cephfs cleanup
[root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/cephfs/pod.yaml
[root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/cephfs/pvc.yaml
# RBD Cleanup
[root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/rbd/pod.yaml
[root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/rbd/pvc.yaml
Step 6: Accessing the Ceph Dashboard
The Ceph dashboard gives you an overview of the status of your Ceph cluster:
The overall health
The status of the mon quorum
The status of the mgr and osds
Status of other Ceph daemons
View pools and PG status
Logs for the daemons, and much more.
List services in the rook-ceph namespace:
[root@k8s-bastion ceph]# kubectl get svc -n rook-ceph
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
csi-cephfsplugin-metrics ClusterIP 10.105.10.255 8080/TCP,8081/TCP 9m56s
csi-rbdplugin-metrics ClusterIP 10.96.5.0 8080/TCP,8081/TCP 9m57s
rook-ceph-mgr ClusterIP 10.103.171.189 9283/TCP 7m31s
rook-ceph-mgr-dashboard ClusterIP 10.102.140.148 8443/TCP 7m31s
rook-ceph-mon-a ClusterIP 10.102.120.254 6789/TCP,3300/TCP 10m
rook-ceph-mon-b ClusterIP 10.97.249.82 6789/TCP,3300/TCP 8m19s
rook-ceph-mon-c ClusterIP 10.99.131.50 6789/TCP,3300/TCP 7m46s
From the output we can confirm port 8443 was configured. Use port forwarding to access the dashboard:
$ kubectl port-forward service/rook-ceph-mgr-dashboard 8443:8443 -n rook-ceph
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
Now it should be accessible at https://localhost:8443
The login username is admin and the password can be extracted using the following command:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
Access the Dashboard with a NodePort
To create a service with a NodePort, save this YAML as dashboard-external-https.yaml:
# cd
# vim dashboard-external-https.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
Create a service that listens on the NodePort:
[root@k8s-bastion ~]# kubectl create -f dashboard-external-https.yaml
service/rook-ceph-mgr-dashboard-external-https created
Check the new service created:
[root@k8s-bastion ~]# kubectl -n rook-ceph get service rook-ceph-mgr-dashboard-external-https
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rook-ceph-mgr-dashboard-external-https NodePort 10.103.91.41 8443:32573/TCP 2m43s
In this example, port 32573 will be opened to expose port 8443 from the ceph-mgr pod. Now you can enter a URL in your browser such as https://[clusternodeip]:32573 and the dashboard will appear. Log in with the admin username and the password decoded from the rook-ceph-dashboard-password secret:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
Ceph dashboard view:
Hosts list:
Bonus: Tearing Down the Ceph Cluster
If you want to tear down the cluster and bring up a new one, be aware of the following resources that will need to be cleaned up:
rook-ceph namespace: The Rook operator and cluster created by operator.yaml and cluster.yaml (the cluster CRD)
/var/lib/rook: Path on each host in the cluster where configuration is cached by the ceph mons and osds
All CRDs in the cluster.
[root@k8s-bastion ~]# kubectl get crds
NAME CREATED AT
apiservers.operator.tigera.io 2021-09-24T18:09:12Z
bgpconfigurations.crd.projectcalico.org 2021-09-24T18:09:12Z
bgppeers.crd.projectcalico.org 2021-09-24T18:09:12Z
blockaffinities.crd.projectcalico.org 2021-09-24T18:09:12Z
cephclusters.ceph.rook.io 2021-09-30T20:32:10Z
clusterinformations.crd.projectcalico.org 2021-09-24T18:09:12Z
felixconfigurations.crd.projectcalico.org 2021-09-24T18:09:12Z
globalnetworkpolicies.crd.projectcalico.org 2021-09-24T18:09:12Z
globalnetworksets.crd.projectcalico.org 2021-09-24T18:09:12Z
hostendpoints.crd.projectcalico.org 2021-09-24T18:09:12Z
imagesets.operator.tigera.io 2021-09-24T18:09:12Z
installations.operator.tigera.io 2021-09-24T18:09:12Z
ipamblocks.crd.projectcalico.org 2021-09-24T18:09:12Z
ipamconfigs.crd.projectcalico.org 2021-09-24T18:09:12Z
ipamhandles.crd.projectcalico.org 2021-09-24T18:09:12Z
ippools.crd.projectcalico.org 2021-09-24T18:09:12Z
kubecontrollersconfigurations.crd.projectcalico.org 2021-09-24T18:09:12Z
networkpolicies.crd.projectcalico.org 2021-09-24T18:09:12Z
networksets.crd.projectcalico.org 2021-09-24T18:09:12Z
tigerastatuses.operator.tigera.io 2021-09-24T18:09:12Z
Edit the CephCluster and add the cleanupPolicy:
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'
Delete the block storage and file storage:
cd ~/
cd rook/deploy/examples
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete -f csi/rbd/storageclass.yaml
kubectl delete -f filesystem.yaml
kubectl delete -f csi/cephfs/storageclass.yaml
Delete the CephCluster Custom Resource:
[root@k8s-bastion ~]# kubectl -n rook-ceph delete cephcluster rook-ceph
cephcluster.ceph.rook.io "rook-ceph" deleted
Verify that the cluster CR has been deleted before continuing to the next step.
kubectl -n rook-ceph get cephcluster
Delete the Operator and related resources:
kubectl delete -f operator.yaml
kubectl delete -f common.yaml
kubectl delete -f crds.yaml
Zapping Devices
# Set the raw disk / raw partition path
DISK="/dev/vdb"
# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)
# Install: yum install gdisk -y (or apt install gdisk)
sgdisk --zap-all $DISK
# Clean hdds with dd
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
# Clean disks such as ssd with blkdiscard instead of dd
blkdiscard $DISK
# These steps only have to be run once on each node
# If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks.
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# ceph-volume setup can leave ceph- directories in /dev and /dev/mapper (unnecessary clutter)
rm -rf /dev/ceph-*
rm -rf /dev/mapper/ceph--*
# Inform the OS of partition table changes
partprobe $DISK
Removing the Cluster CRD Finalizer:
for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ {print $1}'); do
  kubectl get -n rook-ceph "$CRD" -o name | \
  xargs -I {} kubectl patch -n rook-ceph {} --type merge -p '{"metadata":{"finalizers": [null]}}'
done
If the namespace is still stuck in the Terminating state, as seen in the command below:
$ kubectl get ns rook-ceph
NAME STATUS AGE
rook-ceph Terminating 23h
you can check which resources are holding up the deletion, remove the finalizers, and delete those resources:
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph
From my output the resource is a configmap named rook-ceph-mon-endpoints:
NAME DATA AGE
configmap/rook-ceph-mon-endpoints 4 23h
Delete the resource manually:
# kubectl delete configmap/rook-ceph-mon-endpoints -n rook-ceph
configmap "rook-ceph-mon-endpoints" deleted
Recommended reading: Rook Best Practices for Running Ceph on Kubernetes
mmorellm · 4 years ago
Open Source Definitely Changed the Storage Industry
With Linux and other technologies and products, it impacts all areas. By Philippe Nicolas | February 16, 2021 at 2:23 pm
It's not breaking news, but the impact of open source on the storage industry was and is huge, and it won't shrink – just the opposite. The reason is simple: the developer community is the largest one and adoption is very wide. Some people see this as a threat, while others consider the model a democratic effort and believe in that approach. Let's dig in a bit.
First, outside of storage, here is a list of some open source software (OSS) projects that we use every day, directly or indirectly: Linux and FreeBSD of course, Kubernetes, OpenStack, Git, KVM, Python, PHP, HTTP server, Hadoop, Spark, Lucene, Elasticsearch (dual license), MySQL, PostgreSQL, SQLite, Cassandra, Redis, MongoDB (under SSPL), TensorFlow, Zookeeper, or some famous tools and products like Thunderbird, OpenOffice, LibreOffice or SugarCRM. The list is of course super long, very diverse and ubiquitous in our world. Some of these projects triggered waves of company creation as they anticipated market creation and, potentially, domination. Among them are Cloudera and Hortonworks, which both went public promoting Hadoop and merged in 2019; MariaDB as a fork of MySQL, with MySQL itself later acquired by Oracle; and DataStax for Cassandra – though it turns out that this is not always a safe destiny …
Coldago Research estimated that the entire open source industry will represent $27+ billion in 2021 and will pass the barrier of $35 billion in 2024. Historically, one of the roots was the Unix–Linux transition. Unix was widely used and adopted, but it came at a certain price and the source code cost was significant, even prohibitive. Projects like Minix and Linux, developed and studied at universities and research centers, generated tons of users and adopters, with many of them becoming contributors. Is it similar to a religion? Probably not, but it is certainly a philosophy. Red Hat, founded in 1993, demonstrated that an open source business could be big and ready for a long run; the company did its IPO in 1999 and reached an annual run rate around $3 billion. The firm was acquired by IBM in 2019 for $34 billion – amazing, right? Canonical, SUSE, Debian and a few others also show interesting development paths as companies or as communities.
Before that shift, software development was essentially about applications, as system software meant cost – high costs. Also, a startup didn't buy software with the VC money it raised, as that could be seen as suicide outside of its mission. All of this contributed to the open source wave in all directions.
On the storage side, Linux invited students, research centers, communities and start-ups to develop system software, especially block storage approaches, file systems and others like object storage software. Thus we all know many storage software start-ups who leveraged Linux to offer such new storage models. We didn't see a lot of standalone block storage, but rather open source operating systems with block (SCSI-based) storage included. This is a bit different for file and object storage, with plenty of offerings. On the file storage side, the list is significant, with disk file systems and distributed ones, the latter having multiple sub-segments as well. Below is a pretty long list of OSS in the storage world.
Block Storage
Linux-LIO, Linux SCST & TGT, Open-iSCSI, Ceph RBD, OpenZFS, NexentaStor (Community Ed.), Openfiler, Chelsio iSCSI, Open vStorage, CoprHD, OpenStack Cinder
File Storage
Disk File Systems: XFS, OpenZFS, Reiser4 (ReiserFS), ext2/3/4
Distributed File Systems (including cluster, NAS and parallel to simplify the list): Lustre, BeeGFS, CephFS, LizardFS, MooseFS, RozoFS, XtreemFS, CohortFS, OrangeFS (PVFS2), Ganesha, Samba, Openfiler, HDFS, Quantcast, Sheepdog, GlusterFS, JuiceFS, ScoutFS, Red Hat GFS2, GekkoFS, OpenStack Manila
Object Storage
Ceph RADOS, MinIO, Seagate CORTX, OpenStack Swift, Intel DAOS
Other data management and storage related projects
TAR, rsync, OwnCloud, FileZilla, iRODS, Amanda, Bacula, Duplicati, KubeDR, Velero, Pydio, Grau Data OpenArchive
The impact of open source is obvious on commercial software, but also on other emerging or small-footprint OSS projects. By impact we mean disrupting established market positions with radically new approaches. It is illustrated as well by commercial software embedding open source pieces, or by famous, widely adopted open source products that prevent some initiatives from taking off. Among these scenarios, we can list XFS, OpenZFS, Ceph and MinIO, which shake commercial models and were even chosen by vendors that don't need to develop those pieces themselves or sign any OEM deal with potential partners. Again, as we have said many times in the past, the Build, Buy or Partner model is also a reality in that world. To extend these examples, Ceph is recommended to be deployed with the XFS disk file system for OSDs, as is OpenStack Swift. As these last few examples show, open source projects obviously leverage other open source projects, and commercial software does similarly, but we never saw an open source project leveraging a commercial one. That would be a bit antinomic; instead, it acts as a trigger to start development of an open source project offering the same functions. OpenZFS is also used by Delphix, Oracle and in TrueNAS. MinIO is chosen by iXsystems (embedded in TrueNAS), Datera, Humio, Robin.IO, McKesson, MapR (now HPE), Nutanix, Pavilion Data, Portworx (now Pure Storage), Qumulo, Splunk, Cisco, VMware or Ugloo, to name a few. SoftIron leverages Ceph and builds optimized, tailored systems around it. The list is long … and we all have several examples in mind.
Open source players promote their solutions essentially around community and enterprise editions, the difference being the support fee, the patch policies, feature differences and of course the final subscription fees. As we know, innovation often comes from small agile players who have real difficulties approaching large customers, and whose longevity is in doubt. Choosing the OSS path is a way to be embedded and selected by larger providers or directly by users, and it implies some key questions around business models.
Another dimension of the impact on commercial software is related to the behavior of universities and research centers. They prefer to shift budget to hardware and reduce the software budget by using open source. These entities have many skilled people, and potentially the time, to develop and extend open source projects and contribute back to the communities. They see in that way of working a positive and virtuous cycle, everyone feeding the others. Thus they reach new levels of performance, gaining capacity and computing power … ultimately a decision that is understandable under budget constraints and pressure.
Ceph was started during Sage Weil's thesis at UCSC, sponsored by the Advanced Simulation and Computing Program (ASC), including Sandia National Laboratories (SNL), Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL). There is a lot of this: a famous example is Lustre, but also MarFS from LANL, GekkoFS from the University of Mainz, Germany, in association with the Barcelona Supercomputing Center, or BeeGFS, formerly FhGFS, developed by the Fraunhofer Center for High Performance Computing in Germany as well. Lustre was initiated by Peter Braam in 1999 at Carnegie Mellon University. Projects popped up everywhere. Collaboration software, as an extension to storage, shows similar behavior. OwnCloud, an open source file sharing and collaboration software, is used and chosen by many universities and large education sites.
At the same time, choosing open source components or products out of a wish for independence doesn't provide any kind of guarantee of longevity. Remember examples such as HDFS, GlusterFS, OpenIO, NexentaStor or Redcurrant. Some of them got acquired or disappeared, creating issues for users but certainly opportunities for other players watching that space carefully. Some initiatives exist to secure software if doubts about its future appear on the table.
The SDS wave, a bit like LAMP (Linux, Apache web server, MySQL and PHP), had a serious impact on commercial software, as several open source players and solutions jumped into that space, generating significant pricing erosion. This development, good for users, also continues to reduce the differentiators among players, and it has become tougher to notice differences.
In addition, internet giants played a major role in open source development. They have talent, large teams, time and money, and can spend time developing software that fits their needs perfectly. They also control communities, acting in such a way that they put seeds in many directions. The other reason is the difficulty of finding commercial software that can scale to their needs. In other words, a commercial product can scale to the needs of a large corporation but reaches its limits for a large internet player. Historically, these organizations really redefined scalability objectives with new designs and approaches not found or possible with commercial software. We all have examples in mind; in storage, Google File System is a classic one, or Haystack at Facebook. There are also large vendors with internal projects that suddenly appear and are donated as open source to boost community effort and try to trigger some market traction and partnerships; this is the case of Intel DAOS.
Open source is immediately associated with various license models, and this is the complex aspect of source code, as it continues to create difficulties for some people and entities, which affects projects' futures. The debates about ZFS or even Java were well covered in the press at the time. We invite readers to check their preferred page for that, or at least visit the Wikipedia one or this one with the full table on the appendix page. Immediately associated with licenses are the communities, organizations and foundations, and we can mention some of them here, as the list is pretty long: Apache Software Foundation, Cloud Native Computing Foundation, Eclipse Foundation, Free Software Foundation, FreeBSD Foundation, Mozilla Foundation or Linux Foundation … and again Wikipedia is a good place to start.
Open Source Definitely Changed Storage Industry - StorageNewsletter
henrynolastname · 6 years ago
Ansible Roles (from Galaxy)
# all
ansible-galaxy install geerlingguy.ntp
ansible-galaxy install geerlingguy.clamav
ansible-galaxy install geerlingguy.firewall
ansible-galaxy install geerlingguy.security
ansible-galaxy install geerlingguy.git
ansible-galaxy install geerlingguy.munin-node
# all centos only - repo
ansible-galaxy install geerlingguy.repo-epel
ansible-galaxy install geerlingguy.repo-remi
# ansible
ansible-galaxy install geerlingguy.ansible
ansible-galaxy install geerlingguy.ansible-role-packer
# awx (redhat ansible tower)
ansible-galaxy install geerlingguy.awx
# puppet
ansible-galaxy install geerlingguy.puppet
# docker
ansible-galaxy install geerlingguy.docker
ansible-galaxy install geerlingguy.awx-container
# kubernetes
ansible-galaxy install geerlingguy.kubernetes
# apache
ansible-galaxy install geerlingguy.apache
ansible-galaxy install geerlingguy.apache-php-fpm
# nginx
ansible-galaxy install geerlingguy.nginx
# haproxy
ansible-galaxy install geerlingguy.haproxy
# nfs
ansible-galaxy install geerlingguy.nfs
# samba
ansible-galaxy install geerlingguy.samba
# daemonize, running commands as a Unix daemon
ansible-galaxy install geerlingguy.daemonize
# mailHog SMTP server
ansible-galaxy install geerlingguy.mailhog
# postfix mail transfer agent MTA
ansible-galaxy install geerlingguy.postfix
# gitlab
ansible-galaxy install geerlingguy.gitlab
# glusterfs
ansible-galaxy install geerlingguy.glusterfs
# munin
ansible-galaxy install geerlingguy.munin
# let's encrypt
ansible-galaxy install geerlingguy.certbot
# java development
ansible-galaxy install geerlingguy.java
# node.js development
ansible-galaxy install geerlingguy.nodejs
# pip development
ansible-galaxy install geerlingguy.pip
# php
ansible-galaxy install geerlingguy.php
ansible-galaxy install geerlingguy.php-mysql
ansible-galaxy install geerlingguy.php-pgsql
ansible-galaxy install geerlingguy.php-memcached
ansible-galaxy install geerlingguy.php-pear
ansible-galaxy install geerlingguy.php-pecl
ansible-galaxy install geerlingguy.php-versions
# ruby
ansible-galaxy install geerlingguy.ruby
# jenkins
ansible-galaxy install geerlingguy.jenkins
# mysql database
ansible-galaxy install geerlingguy.mysql
# postgresql database
ansible-galaxy install geerlingguy.postgresql
# stackstorm, like IFTTT
ansible-galaxy install stackstorm.stackstorm
# elasticstack elasticsearch
ansible-galaxy install geerlingguy.elasticsearch
# elasticstack kibana
ansible-galaxy install geerlingguy.kibana
# elasticstack logstash
ansible-galaxy install geerlingguy.logstash
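Once installed, these roles are consumed from a playbook. Below is a minimal sketch using the GlusterFS role; the inventory group name and any role variables shown are assumptions, so check each role's README for the variables it actually supports.
# glusterfs.yml -- example playbook applying the installed role (host group name is an assumption)
- hosts: gluster_nodes
  become: true
  roles:
    - geerlingguy.glusterfs

# run it against your inventory
# ansible-playbook -i inventory glusterfs.yml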
masaa-ma · 6 years ago
Kubernetes Config & Storage Resources (Part 2)
from https://thinkit.co.jp/article/14195
Config & Storage Resources
In the previous article, we introduced the three kinds of Config & Storage resources that users work with directly, and covered two of them: Secret and ConfigMap. This time we cover the remaining one, PersistentVolumeClaim, together with PersistentVolume and Volume, which you need to understand PersistentVolumeClaim.
The five broad categories of Kubernetes resources
Resource category — Description
Workloads resources — resources related to running containers
Discovery & LB resources — resources that provide endpoints for exposing containers externally
Config & Storage resources — resources related to configuration, secrets, and persistent volumes
Cluster resources — resources related to security, quotas, and the like
Metadata resources — resources for operating on other resources
The difference between Volume, PersistentVolume, and PersistentVolumeClaim
A Volume makes an existing volume (a host path, NFS, Ceph, a GCP volume, and so on) available by specifying it directly in the YAML manifest. Users therefore cannot create new volumes or delete existing ones through it, and no Volume resource is created from a YAML manifest.
PersistentVolume, on the other hand, works with an external system that provides persistent volumes, and can create new volumes and delete existing ones. Concretely, you create a separate PersistentVolume resource from a YAML manifest or similar.
The same plugins are available for both PersistentVolume and Volume. For example, the GCP and AWS volume services provide both a PersistentVolume plugin and a Volume plugin. The PersistentVolume plugin can handle the lifecycle of volumes, such as creation and deletion (and Dynamic Provisioning when PersistentVolumeClaim is used), whereas the Volume plugin can only use volumes that already exist.
PersistentVolumeClaim is, as the name suggests, the resource for claiming a volume out of the created PersistentVolume resources. A PersistentVolume merely registers a volume with the cluster, so to actually use it from a Pod you need to define a PersistentVolumeClaim. When the Dynamic Provisioning feature (explained later) is used, a PersistentVolume is created dynamically at the moment the PersistentVolumeClaim is used, so the order may feel reversed.
Volume
Kubernetes abstracts volumes and defines them as resources loosely coupled to Pods. A variety of Volume plugins are provided, such as those below. There are more besides this list; see https://kubernetes.io/docs/concepts/storage/volumes/ for details.
EmptyDir
HostPath
nfs
iscsi
cephfs
GCPPersistentVolume
gitRepo
Unlike PersistentVolume, a Volume statically assigns an area to the Pod, so watch out for conflicts.
EmptyDir
EmptyDir can be used as a temporary disk area for a Pod. It is deleted when the Pod is terminated.
Diagram: EmptyDir
Listing 1: emptydir-sample.yml, which specifies an EmptyDir
apiVersion: v1
kind: Pod
metadata:
  name: sample-emptydir
spec:
  containers:
  - image: nginx:1.12
    name: nginx-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

$ kubectl apply -f emptydir-sample.yml
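To confirm the emptyDir volume is mounted, you can check the mount point inside the running pod, for example:

$ kubectl exec -it sample-emptydir -- df -h /cache

The backing directory lives on the node under the kubelet's pod directory and disappears together with the Pod.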
HostPath
HostPath is a plugin that maps an area on the Kubernetes node into the container. For type, you choose from Directory, DirectoryOrCreate, File, Socket, BlockDevice, and so on. The difference between DirectoryOrCreate and Directory is whether or not the directory is created at startup when it does not exist.
Diagram: HostPath
Listing 2: hostpath-sample.yml, which uses HostPath
apiVersion: v1
kind: Pod
metadata:
  name: sample-hostpath
spec:
  containers:
  - image: nginx:1.12
    name: nginx-container
    volumeMounts:
    - mountPath: /srv
      name: hostpath-sample
  volumes:
  - name: hostpath-sample
    hostPath:
      path: /data
      type: DirectoryOrCreate

$ kubectl apply -f hostpath-sample.yml
PersistentVolume (PV)
A PersistentVolume is a volume reserved as a persistent storage area. The Volumes described above were attached by writing them directly into the Pod definition, whereas a PersistentVolume is created as an individual resource before being used. In other words, you need to create a PersistentVolume resource with a YAML manifest.
Strictly speaking, PersistentVolume is classified as a Cluster resource rather than a Config & Storage resource, but it is covered in this chapter for convenience.
Types of PersistentVolume
A PersistentVolume is basically a disk attached over the network. HostPath is provided for testing on a single node, but it is not practical as a PersistentVolume. PersistentVolume has a pluggable architecture; some examples are listed below. There are others as well; see https://kubernetes.io/docs/concepts/storage/persistent-volumes/ for details.
GCE Persistent Disk
AWS Elastic Block Store
NFS
iSCSI
Ceph
OpenStack Cinder
GlusterFS
Creating a PersistentVolume
When creating a PersistentVolume, you configure items such as the following.
Labels
Capacity
Access modes
Reclaim Policy
Mount options
Storage Class
Per-PersistentVolume settings
Listing 3: pv_sample.yml, which creates a PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-pv
  labels:
    type: nfs
    environment: stg
spec:
  capacity:
    storage: 10G
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
  - hard
  nfs:
    server: xxx.xxx.xxx.xxx
    path: /nfs/sample

$ kubectl create -f pv_sample.yml
Checking after creation shows that it was created successfully and is in the Bound status.
Listing 4: Checking the state of the PersistentVolume
$ kubectl get pv
NAME        CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM     STORAGECLASS   REASON    AGE
sample-pv   10Gi       RWX           Retain          Bound                                        6s
The PersistentVolume configuration items are explained below.
Labels
When creating PersistentVolumes without Dynamic Provisioning, it becomes hard to tell what kind of PersistentVolume each one is, so we recommend labeling them with type, environment, speed, and the like. Without labels, a PersistentVolume has to be assigned automatically from the existing ones, which makes it difficult for users to deliberately attach the disk they intend.
Without labels
If you do attach labels, on the other hand, the PersistentVolumeClaim can specify the volume labels, so scheduling can be done flexibly.
With labels
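For example, a PersistentVolumeClaim can target the labeled PersistentVolume above with a label selector. A minimal sketch (the claim name is arbitrary) might look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-pvc
spec:
  selector:
    matchLabels:
      type: nfs          # match the labels set on sample-pv
      environment: stg
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3G
  storageClassName: slow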
Capacity
Specify the capacity. One point to note here is that in environments where Dynamic Provisioning is not available, you should also prepare smaller PersistentVolumes. For example, in a situation like the figure below, if the PersistentVolumeClaim requests 3GB, the closest-sized PersistentVolume available, 10GB, is the one that gets assigned.
The difference between the PersistentVolumeClaim request and the actual PersistentVolume capacity
Access modes
There are three access modes.
PersistentVolume access modes
Mode — Description
ReadWriteOnce (RWO) — read/write from a single node
ReadOnlyMany (ROX) — write from a single node, read from multiple nodes
ReadWriteMany (RWX) — read/write from multiple nodes
The supported access modes differ by PersistentVolume type. Also, the block storage services provided by GCP, AWS, and OpenStack support only ReadWriteOnce. See https://kubernetes.io/docs/concepts/storage/persistent-volumes/ for details.
Reclaim Policy
The Reclaim Policy controls how the PersistentVolume is handled after it is no longer needed (whether it is destroyed, reused, and so on).
Retain
The PersistentVolume's data is kept and not deleted
The PersistentVolume will not be mounted again by another PersistentVolumeClaim
Recycle
The PersistentVolume's data is deleted (rm -rf ./*) and the volume is made reusable
It can then be mounted again by other PersistentVolumeClaims
Delete
The PersistentVolume itself is deleted
Used for external volumes provisioned by GCE, AWS, OpenStack, and so on
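The reclaim policy of an existing PersistentVolume can also be changed after the fact with kubectl patch, for example:

$ kubectl patch pv sample-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

Note that the Recycle policy is deprecated in recent Kubernetes versions in favor of dynamic provisioning.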
Mount options
Depending on the type of PersistentVolume, additional mount options can be specified. See each PersistentVolume plugin's specification for details.
Storage Class
With Dynamic Provisioning, when a user requests a PersistentVolume through a PersistentVolumeClaim, the Storage Class is used to specify what kind of disk is wanted. Choosing a Storage Class amounts to choosing the type of external volume.
For example, in the case of OpenStack Cinder, you can choose which backend (Ceph, ScaleIO, Xtremio, etc.) and which zone the volume is carved out of.
Listing 5: storageclass_sample.yml, which specifies a Storage Class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sample-storageclass
parameters:
  availability: test-zone-1a
  type: scaleio
provisioner: kubernetes.io/cinder
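A PersistentVolumeClaim that requests this StorageClass then triggers dynamic provisioning; a minimal sketch (claim name and size are arbitrary):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-dynamic-pvc
spec:
  storageClassName: sample-storageclass   # the StorageClass defined above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi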
Per-PersistentVolume-plugin settings
NFS was used in this example, but the actual configuration items differ for each PersistentVolume plugin. For example, spec.nfs is not used when the GlusterFS plugin is used.
andrey-v-maksimov · 8 years ago
Kubernetes: persistent disks with GlusterFS and heketi
In the previous article, "Kubernetes: using it together with GlusterFS," we walked through the manual process of creating a GlusterFS cluster and then attaching it to your Kubernetes cluster as a separate StorageClass, so that your applications can use persistent disks (PersistentVolume). This article is about the same thing, but in a more automated and simpler…
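For reference, the kind of StorageClass described here typically points at heketi's REST endpoint through the in-tree GlusterFS provisioner. A minimal sketch is below; the URL, user, and secret names are placeholders for your own heketi deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"   # placeholder heketi REST endpoint
  restauthenabled: "true"
  restuser: "admin"                           # placeholder heketi user
  secretName: "heketi-secret"                 # placeholder secret holding the heketi key
  secretNamespace: "default"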
xenonstack-blog · 8 years ago
Deploying PostgreSQL on Kubernetes
What is PostgreSQL?
PostgreSQL is a powerful, open source Relational Database Management System. PostgreSQL is not controlled by any organization or any individual. Its source code is available free of charge. It is pronounced as "post-gress-Q-L".
PostgreSQL has earned a strong reputation for its reliability, data integrity, and correctness.
It runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, MacOS, Solaris, Tru64), and Windows.
It is fully ACID compliant, has full support for foreign keys, joins, views, triggers, and stored procedures (in multiple languages).
It includes most SQL:2008 data types, including INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP.
It also supports storage of binary large objects, including pictures, sounds, or video.
It has native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation.
Prerequisites
To follow this guide you need -
Kubernetes Cluster
GlusterFS Cluster
Step 1 - Create a PostgreSQL Container Image
Create a file named "Dockerfile" for PostgreSQL. This image contains our custom configuration; the Dockerfile will look like this -
FROM ubuntu:latest
MAINTAINER XenonStack
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main" > /etc/apt/sources.list.d/pgdg.list
RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.6 postgresql-client-9.6 postgresql-contrib-9.6
RUN /etc/init.d/postgresql start &&\
    psql --command "CREATE USER root WITH SUPERUSER PASSWORD 'xenonstack';" &&\
    createdb -O root xenonstack
RUN echo "host all  all 0.0.0.0/0  md5" >> /etc/postgresql/9.6/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.6/main/postgresql.conf
# Expose the PostgreSQL port
EXPOSE 5432
# Add VOLUMEs to allow backup of databases
VOLUME  ["/var/lib/postgresql"]
# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.6/bin/postgres", "-D", "/var/lib/postgresql", "-c", "config_file=/etc/postgresql/9.6/main/postgresql.conf"]
This Postgres image has a base image of ubuntu xenial. After that, we create Super User and default databases. Exposing 5432 port will help external system to connect the PostgreSQL server.
Step 2 - Build PostgreSQL Docker Image
$ docker build -t dr.xenonstack.com:5050/postgres:v9.6 .
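Since the deployment in Step 4 pulls this image from the private registry, you would typically also push it there after the build; a sketch, assuming you are already logged in to that registry:

$ docker push dr.xenonstack.com:5050/postgres:v9.6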
Step 3 - Create a Storage Volume (Using GlusterFS)
Using the commands below, create a volume in GlusterFS for PostgreSQL and start it. Because we don't want to lose PostgreSQL database data just because one Gluster server in the cluster dies, we set replica 2 or higher for better availability of the data.
$ gluster volume create postgres-disk replica 2 transport tcp k8-master:/mnt/brick1/postgres-disk k8-1:/mnt/brick1/postgres-disk
$ gluster volume start postgres-disk
$ gluster volume info postgres-disk
Step 4 - Deploy PostgreSQL on Kubernetes
Deploying PostgreSQL on Kubernetes has the following prerequisites -
Docker Image: We have created a Docker Image for Postgres in Step 2
Persistent Shared Storage Volume: We have created a Persistent Shared Storage Volume in Step 3
Deployment & Service Files: Next, we will create Deployment & Service Files
Create a file named "deployment.yml" for PostgreSQL. The deployment file looks like the following -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
  namespace: production
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: postgres
    spec:
      containers:
      - name: postgres
        image: dr.xenonstack.com:5050/postgres:v9.6
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: superpostgres
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
      volumes:
      - name: postgredb
        glusterfs:
          endpoints: glusterfs-cluster
          path: postgres-disk
          readOnly: false
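Note that the volume definition above refers to an Endpoints object named glusterfs-cluster, which is not shown in this excerpt. A minimal sketch of what it might look like; the IP addresses are placeholders for your actual Gluster nodes (k8-master and k8-1 from Step 3):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
  namespace: production
subsets:
- addresses:
  - ip: 10.0.0.11        # placeholder: IP of k8-master
  - ip: 10.0.0.12        # placeholder: IP of k8-1
  ports:
  - port: 1              # a port value is required but not used by the glusterfs volume plugin

A headless Service with the same name is often created alongside it so that the Endpoints object persists.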
Continue Reading The Full Article At - XenonStack.com/Blog
1 note · View note
faizrashis1995 · 5 years ago
Text
How ThoughtSpot Uses Kubernetes for Dev Infrastructure
Kubernetes is one of the hottest open-source projects these days. It's a production-grade container orchestration system, inspired by Google's own Borg and released into the wild in 2014. Thousands of developers have joined the project since then, and it is now becoming an industry standard for running containerized applications. Kubernetes is designed to run production workloads at scale, but it's capable of much more. In this article, I'll talk about my experience setting up a Kubernetes cluster as a core component of a development infrastructure while working at ThoughtSpot.
Context
ThoughtSpot is developing a sophisticated BI system for large enterprises, which runs on top of another Borg-inspired orchestration system called Orion. Orion was designed internally, at a time when neither Docker nor Kubernetes was publicly available.
A few things to know about ThoughtSpot that are relevant to this article:
The system consists of a few dozen services, and the overhead of running them all is quite massive. An idling system with very little data requires 20–30 GB of RAM, 4 CPU cores, and 2–3 minutes to start up.
ThoughtSpot sells its own appliance, typically with at least 1 TB of RAM per cluster, so the 20–30 GB overhead is not a problem for the product. However, it's quite an issue for the dev infrastructure.
There’s a lot of retired hardware available for developers in the office.
Motivation
I was initially assigned to solve an easy-sounding problem: make integration tests faster. There were a few hundred Selenium-based workflows, which ran sequentially and took up to 10 hours to complete. The obvious solution was to parallelize them. The problem was that they were not designed to run concurrently, so we had to either refactor all the tests or provide an isolated copy of the ThoughtSpot system (a test backend) for every thread to run on. Redesigning the tests might look like a cleaner solution, but it would require a tremendous effort from the whole engineering team and a lot of test-related changes in the product, so it was not feasible. We decided to take the second approach, and that left me with the task I ended up solving with the help of Docker and Kubernetes: make it possible to quickly (in 2–3 minutes) spin up dozens of test backends with pre-loaded test data, run tests, tear them down, repeat.
 The path
With this task in mind, I started looking for options. Actually, some infrastructure was already in place: we had a VMware cluster running on four servers. Current integration tests were already using it for provisioning test backends, but there were problems:
It could only sustain about one hundred VMs; beyond that we would have to buy more of the expensive proprietary hardware. It was also already about 80% utilized by other workflows in the company.
Cloning 10 or more VMs in parallel was blowing up the I/O. It had to move around ~500 GB of disk snapshots, and it took forever.
VMs were taking way more than 2–3 minutes to start up.
Virtualization wasn't a viable option for us, so we turned our heads to containers. In early 2016 we were looking at two main options: LXC/LXD and Docker. Docker was already the recognized leader, and LXD 2.0 was only going to be released along with Ubuntu 16.04. However, Docker has a strong bias towards small, single-process containers, which have to talk to each other over the network and form a complete system that way. LXD, on the other hand, offered something that looked more like familiar VMs, with an init system and the ability to run multiple services inside a single container. With Docker, we had to either compromise on cleanliness and use it in an "LXD way" or refactor the whole system to make it run on top of Docker, which was not feasible. On the other hand, with LXD we could not rely on the extensive community knowledge and documentation that Docker had. Still, we decided to give it a shot.
 LXC/LXD
I took four machines, each with 256 GB RAM, 40 CPU cores, 2 SSDs and 4 HDDs, installed LXD and configured a ZFS pool on each node. I then set up a Jenkins job that would build the project, install it inside an LXD container on one of these machines, export an image and push it to the other three nodes. Each integration test job would then just do lxd clone current-master-snapshot <backend-$i>, run the tests, and destroy the containers once done (a rough sketch of this lifecycle is shown below). Because of the copy-on-write nature of ZFS, the clone operation was now instantaneous. Each node was able to handle about ten test backends before things would start crashing. This was a great result, much better than what VMware was giving us, but with a major drawback: it was neither flexible nor scalable. Each test job would need to know exactly which of the LXD nodes to create its backends on, and if it required more than 10 of them, they just wouldn't fit. In other words, without an orchestration system, it was not scalable. With LXD, at that time, we had only two options: use OpenStack or write our own scheduler (which we didn't want to write).
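For illustration only, the per-job lifecycle described above looked roughly like the following; the container name and test entry point are placeholders, and the exact LXD client syntax has changed between releases:

# Clone the latest master image into a fresh test backend (near-instant thanks to ZFS copy-on-write)
lxc copy current-master-snapshot backend-1
lxc start backend-1

# Run the test workload against the backend, then throw it away
lxc exec backend-1 -- /opt/ts/run-integration-tests   # hypothetical test entry point
lxc delete --force backend-1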
 OpenStack supports LXD as a compute backend, but in 2016 it was all very fresh, barely documented and barely working. I’ve spent about a week trying to configure an OpenStack cluster and then gave up. Luckily, we had another unexplored path: Docker and Kubernetes.
 Docker & Kubernetes
After the first pass over the documentation, it was clear that neither Docker's nor Kubernetes' philosophy fit our use case. Docker explicitly said that "Containers are not VMs", and Kubernetes was designed for running one (or a few) applications consisting of many small containerized services, rather than many fat single-container apps. On the other hand, we felt that the movement behind Kubernetes was powerful. It's a top-tier open-source product with an active community, and it can (and should) eventually replace our own home-grown orchestration system in the product. So, all the knowledge we acquired while fitting Kubernetes to the dev infrastructure's needs could be reused later when migrating the main product to Kubernetes. With that in mind, we dove into building the new infrastructure.
We couldn't get rid of the systemd dependency in our product, so we ended up packaging everything into a CentOS 7 based container with systemd as the top-level process. Here's the base image Dockerfile that worked for us. We made a very heavy Docker image (20 GB initially, 5 GB after some optimizations), which encapsulates Orion (ThoughtSpot's own container engine), which in turn runs 20+ ThoughtSpot services in cgroup containers; all of that roughly corresponds to a single-node production setup. It was cumbersome, but it was the quickest way from nothing to something usable.
After that, I took a few other physical machines and created our first Kubernetes cluster on them. Among all of the Kubernetes abstractions, only the Pod was relevant to our problem, as it's really just a container running somewhere. For most of our test cases we would need to create multiple Pods, and having the ability to group them by workload would be helpful. Perhaps labels are better suited for this purpose, but we decided to exploit a ReplicationController. A ReplicationController is an abstraction that creates a number of Pods (according to a replication factor), makes sure they are always alive and, on the other end, receives traffic from a Service and redistributes it across the Pods. A ReplicationController assumes that every Pod is equal and stateless, so that every new Service connection can be routed to a random Pod. In our case, we did not create a Service and just used the ReplicationController as a way to group Pods and make sure they get automatically re-created if anything dies. Every test job would then create a ReplicationController for itself and use the underlying Pods directly (an illustrative manifest is sketched below).
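For illustration only (this is not ThoughtSpot's actual manifest), a per-job ReplicationController along these lines would group the test-backend Pods; the name, labels, image, resource numbers and replica count are all placeholders:

apiVersion: v1
kind: ReplicationController
metadata:
  name: test-backends-job-42          # hypothetical per-job name
spec:
  replicas: 10                        # one Pod per test backend the job needs
  selector:
    app: test-backend
    job: "42"
  template:
    metadata:
      labels:
        app: test-backend
        job: "42"
    spec:
      containers:
      - name: backend
        image: registry.example.com/ts-backend:master   # placeholder image reference
        securityContext:
          privileged: true            # systemd-in-container images typically need extra privileges
        resources:
          requests:
            memory: "24Gi"
            cpu: "4"

Once the job finishes, deleting the ReplicationController tears down all of its Pods in one step.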
 Pod networking hack
We relied on Pods behaving like real VMs API-wise. In particular, we needed SSH access to every Pod and the ability to talk to dynamically allocated ports. Also, every Pod was obviously stateful, as the image encapsulated the state store in it. This effectively meant that instead of using Services and load balancing through kube-proxy, we had to break into the pod network directly. We did that by enabling IP forwarding on the Kubernetes master node (turning it into a router) and re-configuring all office routers to route 172.18.128.0/16 (our pod network) through the Kubernetes master node. This is a terrible hack that should never be done in production environments, but it allowed us to quick-start the dev infrastructure, solve the immediate problem, and start looking into ways to make our product Kubernetes-ready in the future.
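A minimal sketch of the hack described above, assuming the pod CIDR from the post and a placeholder master address (router-side syntax will differ by vendor; Linux commands are shown):

# On the Kubernetes master node: allow it to forward traffic into the pod network
sudo sysctl -w net.ipv4.ip_forward=1

# On office routers or Linux workstations that need direct access to Pods
sudo ip route add 172.18.128.0/16 via <master-node-ip>   # <master-node-ip> is a placeholder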
- The Kubernetes cluster is running on 20 physical machines, providing 7 TB of RAM and 928 CPU cores combined.
- Every host node is running CentOS 7 with the 4.4-lt Linux kernel.
- We use Weave as an overlay network, and the routing hack is still in place.
- We run an in-house Docker registry, to which the CI pipeline uploads a product image every time the master or release branch build succeeds.
- We use the Jenkins Kubernetes plugin to provision Jenkins slaves on Kubernetes dynamically.
- We've recently deployed GlusterFS on a few nodes and started experimenting with persistent stateful services. This and this were the essential tutorials.
MAAS
During this project we discovered another great open-source tool, which helped us a lot with managing physical hardware. It's called MAAS, which stands for "Metal as a Service."
It's a tool that leverages PXE booting and remote node control to allow dynamic re-imaging of nodes with an arbitrary OS image. On the user side, it provides a REST API and a nice UI, so you can provision physical machines AWS-style without actually touching the hardware. It requires some effort to set up initially, but once it's there, the whole physical infrastructure becomes almost as flexible as the cloud.
Right now we provision plain CentOS 7 nodes through MAAS and then run an Ansible script that upgrades the kernel, installs all the additional software, and adds the node to the Kubernetes cluster. (link to a gist; a simplified sketch of such a playbook is shown below)
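The gist itself is not reproduced in this excerpt, so the following is only a rough, hypothetical sketch of what such a node-onboarding playbook might contain; the package names, variables, and kubeadm join parameters are assumptions, not ThoughtSpot's actual script:

---
- hosts: new_k8s_nodes
  become: yes
  tasks:
    - name: Install container runtime and Kubernetes packages
      yum:
        name:
          - docker
          - kubelet
          - kubeadm
        state: present

    - name: Enable and start the required services
      service:
        name: "{{ item }}"
        state: started
        enabled: yes
      with_items:
        - docker
        - kubelet

    - name: Join the node to the cluster (endpoint, token and hash are placeholders)
      command: >
        kubeadm join {{ k8s_master }}:6443
        --token {{ join_token }}
        --discovery-token-ca-cert-hash {{ ca_cert_hash }}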
 Nebula
 Most of the developers or CI jobs do not interact with MAAS or Kubernetes directly. We have another custom layer on top of that, which aggregates all available resources together and provides a single API and UI for allocating them. It’s called Nebula, and it can create and destroy test backends on Kubernetes, as well as on the old VMware infrastructure, AWS, or physical hardware (through MAAS). It also implements the concept of a lease: every resource provisioned is assigned to a person or a CI job for a certain time. When the lease expires, the resource is automatically reclaimed or cleaned up.
 LXCFS
By default, Docker mounts the /proc filesystem from the host, and hence /proc/stat (as well as meminfo, cpuinfo, etc.) does not reflect container-specific information. In particular, these files do not reflect any resource quotas set on the cgroup. Some processes in our product and CI pipeline check the total RAM available and size their own memory allocations accordingly. If a process doesn't check the limit from the cgroup, it can easily allocate more memory than the container quota allows and then get killed by the OOM killer. In particular, this was happening a lot with the JS uglifier, which we were running as part of the product build. The problem is described and discussed here, and one of the solutions for it is to use LXCFS.
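To make the mismatch concrete, here is roughly what a well-behaved process would read instead of /proc/meminfo; the cgroup path assumes cgroup v1, which is what Docker used at the time:

# Host-wide value that /proc/meminfo reports inside a container without LXCFS
grep MemTotal /proc/meminfo

# Actual memory quota of this container (cgroup v1 path, assumed)
cat /sys/fs/cgroup/memory/memory.limit_in_bytes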
LXCFS is a small FUSE filesystem written with the intention of making Linux containers feel more like a virtual machine. It started as a side project of LXC but is usable by any runtime.
LXCFS ensures that the information provided by crucial files in procfs, such as:
- /proc/cpuinfo
- /proc/diskstats
- /proc/meminfo
- /proc/stat
- /proc/swaps
- /proc/uptime
is container-aware, so that the values displayed (e.g. in /proc/uptime) really reflect how long the container has been running and not how long the host has been running.
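As an illustration of how LXCFS is commonly wired into plain Docker containers, the LXCFS-provided files can be bind-mounted over their /proc counterparts; the /var/lib/lxcfs prefix is the usual default mount point, but verify it on your system:

docker run -it --rm \
  -v /var/lib/lxcfs/proc/cpuinfo:/proc/cpuinfo:ro \
  -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo:ro \
  -v /var/lib/lxcfs/proc/stat:/proc/stat:ro \
  -v /var/lib/lxcfs/proc/uptime:/proc/uptime:ro \
  -m 4g ubuntu grep MemTotal /proc/meminfo   # now reports roughly the 4 GB limit, not the host's RAM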
Conclusion
It took us quite a lot of time to figure everything out. There was a tremendous lack of documentation and community knowledge at the beginning, when we were just starting with Kubernetes 1.4. We were scraping together pieces of information from all over the web and learning by debugging. We also made dozens of changes to our product, re-designed the CI pipeline, and tried many other things that are not mentioned in this article. In the end, however, it all played out well, and Kubernetes became a cornerstone of the dev infrastructure at ThoughtSpot, providing much-needed flexibility and allowing us to utilize all the existing hardware available in the office. I left the company in September, but the project got handed over to other developers and keeps evolving. I know that many people are trying to build something similar for their companies, so I would be happy to answer any questions in the comments below. [Source] - https://www.thoughtspot.com/codex/how-thoughtspot-uses-kubernetes-dev-infrastructure
Basic & Advanced Kubernetes training using cloud computing, AWS, Docker, etc. in Mumbai. The Advanced Containers domain is used for 25 hours of Kubernetes training.
0 notes
virtualizationhowto · 12 days ago
Text
GlusterFS vs Ceph: Two Different Storage Solutions with Pros and Cons
GlusterFS vs Ceph: Two Different Storage Solutions with Pros and Cons @vexpert #vmwarecommunities #ceph #glusterfs #glusterfsvsceph #cephfs #containerstorage #kubernetesstorage #virtualization #homelab #homeserver #docker #kubernetes #hci
I have been trying out various storage solutions in my home lab environment over the past couple of months or so. Two that I have been extensively testing are GlusterFS and Ceph, specifically GlusterFS vs CephFS, which is Ceph's file system running on top of Ceph's underlying storage. I wanted to give you a list of the pros and cons of GlusterFS vs Ceph that I have seen in working with…
0 notes
fbreschi · 6 years ago
Text
Increase GlusterFS volume size in Kubernetes
http://bit.ly/2VLwinu
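The link above is shortened, so for quick context: growing a GlusterFS-backed volume in Kubernetes generally means expanding the Gluster volume itself (directly or through Heketi) and, where the StorageClass allows expansion, resizing the claim. A rough sketch with the volume ID, claim name, and sizes as placeholders:

# Expand the underlying volume through Heketi (size is in GB)
heketi-cli volume expand --volume=<volume-id> --expand-size=10

# If the StorageClass sets allowVolumeExpansion: true, the claim can be resized as well
kubectl patch pvc <claim-name> -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'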
0 notes
computingpostcom · 2 years ago
Text
The Dynamic volume provisioning in Kubernetes allows storage volumes to be created on-demand, without manual Administrator intervention. When developers are doing deployments without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, from where the PersistentVolumes are created. This guide will discuss how you can achieve Dynamic Volume Provisioning on Kubernetes by using GlusterFS distributed storage solution and Heketi RESTful management interface. It is expected you have deployed Heketi and GlusterFS scale-out network-attached storage file system. For Ceph, check: Ceph Persistent Storage for Kubernetes with Cephfs Persistent Storage for Kubernetes with Ceph RBD How Dynamic Provisioning is configured in Kubernetes In Kubernetes, dynamic volume provisioning is based on the API object StorageClass from the API group storage.k8s.io. As a cluster administrator, you’ll define as many StorageClass objects as needed, each specifying a volume plugin ( provisioner) that provisions a volume and the set of parameters to pass to that provisioner when provisioning. So below are the steps you’ll use to configure Dynamic Volume Provisioning on Kubernetes using Gluster and Heketi API. Setup GlusterFS and Heketi It is expected you have a running Gluster and Heketi before you continue with configurations on the Kubernetes end. Refer to our guide below on setting them up. Setup GlusterFS Storage With Heketi on CentOS 8 / CentOS 7 At the moment we only have guide for CentOS, but we’re working on a deployment guide for Ubuntu/Debian systems. For containerized setup, check: Setup Kubernetes / OpenShift Dynamic Persistent Volume Provisioning with GlusterFS and Heketi Once the installation is done, proceed to step 2: Create StorageClass Object on Kubernetes We need to create a StorageClass object to enable dynamic provisioning for container platform users. The StorageClass objects define which provisioner should be used and what parameters should be passed to that provisioner when dynamic provisioning is invoked. Check your Heketi Cluster ID $ heketi-cli cluster list Clusters: Id:b182cb76b881a0be2d44bd7f8fb07ea4 [file][block] Create Kubernetes Secret Get a base64 format of your Heketi admin user password. $ echo -n "PASSWORD" | base64 Then create a secret with the password for accessing Heketi. $ vim gluster-secret.yaml apiVersion: v1 kind: Secret metadata: name: heketi-secret namespace: default type: "kubernetes.io/glusterfs" data: # echo -n "PASSWORD" | base64 key: cGFzc3dvcmQ= Where: cGFzc3dvcmQ= is the output of echo command. Create the secret by running the command: $ kubectl create -f gluster-secret.yaml Confirm secret creation. $ kubectl get secret NAME TYPE DATA AGE heketi-secret kubernetes.io/glusterfs 1 1d Create StorageClass Below is a sample StorageClass for GlusterFS using Heketi. $ cat glusterfs-sc.yaml kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: gluster-heketi provisioner: kubernetes.io/glusterfs reclaimPolicy: Delete volumeBindingMode: Immediate allowVolumeExpansion: true parameters: resturl: "http://heketiserverip:8080" restuser: "admin" secretName: "heketi-secret" secretNamespace: "default" volumetype: "replicate:2" volumenameprefix: "k8s-dev" clusterid: "b182cb76b881a0be2d44bd7f8fb07ea4" Where: gluster-heketi is the name of the StorageClass to be created. The valid options for reclaim policy are Retain, Delete or Recycle. 
The Delete policy means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim. The volumeBindingMode field controls when volume binding and dynamic provisioning should occur.
Valid options are Immediate & WaitForFirstConsumer. The Immediate mode indicates that volume binding and dynamic provisioning occurs once the PersistentVolumeClaim is created. The WaitForFirstConsumer mode delays the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created. The resturl is the URL of your heketi endpoint heketi-secret is the secret created for Heketi credentials. default is the name of namespace where secret was created replicate:2 indicated the default replication factor for Gluster Volumes created. For more HA, use 3. volumenameprefix: By default dynamically provisioned volumes have the naming schema of vol_UUID format. We have provided a desired volume name from storageclass. So the naming scheme will be: volumenameprefix_Namespace_PVCname_randomUUID b182cb76b881a0be2d44bd7f8fb07ea4 is the ID of the cluster obtained from the command heketi-cli cluster list Another parameter that can be set is: volumeoptions: "user.heketi.zone-checking strict" The default setting/behavior is: volumeoptions: "user.heketi.zone-checking none" This forces Heketi to strictly place replica bricks in different zones. The required minimum number of nodes required to be present in different zones is 3 if the replica value is set to 3. Once the file is created, run the following command to create the StorageClass object. $ kubectl create -f gluster-sc.yaml Confirm StorageClass creation. $ kubectl get sc NAME PROVISIONER AGE glusterfs-heketi kubernetes.io/glusterfs 1d local-storage kubernetes.io/no-provisioner 30d Step 2: Create PersistentVolumeClaim Object When a user is requesting dynamically provisioned storage, a storage class should be included in the PersistentVolumeClaim. Let’s create a 1GB request for storage: $ vim glusterfs-pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: gluster-pvc annotations: volume.beta.kubernetes.io/storage-class: gluster-heketi spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi Create object: $ kubectl create --save-config -f glusterfs-pvc.yaml Confirm: $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE glusterfs-pvc Bound pvc-34b9b5e9-fbde-11e9-943f-00505692ee7e 1Gi RWX glusterfs-heketi 1d After creation, you can use it in your deployments. To use the volume we reference the PVC in the YAML file of any Pod/Deployment like this for example: apiVersion: v1 kind: Pod metadata: name: gluster-pod labels: name: gluster-pod spec: containers: - name: gluster-pod image: busybox command: ["sleep", "60000"] volumeMounts: - name: gluster-vol mountPath: /usr/share/busybox readOnly: false volumes: - name: gluster-vol persistentVolumeClaim: claimName: glusterfs-pvc That’s it for today. You should have a working Dynamic Volume Provisioning With Heketi & GlusterFS for your Kubernetes platform.
0 notes
hiro49 · 6 years ago
Text
Comparison and summary of Kubernetes volume plugins [GlusterFS] on @Qiita https://t.co/EF5ELcEOjG
Comparison and summary of Kubernetes volume plugins [GlusterFS] on @Qiita https://t.co/EF5ELcEOjG
— m (@m3816) January 8, 2019
from Twitter https://twitter.com/m3816 January 09, 2019 at 01:28AM
0 notes
ericvanderburg · 6 years ago
Text
Debugging Kubernetes: Common Errors When Using GlusterFS for Persistent Volumes
http://i.securitythinkingcap.com/QpFyBD
0 notes
virtualizationhowto · 29 days ago
Text
CephFS for Docker Container Storage
CephFS for Docker Container Storage @vexpert #vmwarecommunities #ceph #cephfs #dockercontainers #docker #kubernetes #dockerswarm #homelab #homeserver
Given that I have been trying Ceph recently for Docker container storage (see my post on that topic here), I wanted to see if I could effectively use CephFS for Docker container storage. If you have been following along in my, hopefully helpful, escapades, you know that I have also tried out GlusterFS recently. However, with it now deprecated, I wanted to steer towards a solution…
0 notes
computingpostcom · 2 years ago
Text
In this guide, you’ll learn to install and configure GlusterFS Storage on CentOS 8 / CentOS 7 with Heketi. GlusterFS is a software defined, scale-out storage solution designed to provide affordable and flexible storage for unstructured data. GlusterFS allows you to unify infrastructure and data storage while improving availability performance and data manageability.   GlusterFS Storage can be deployed in the private cloud or datacenter or in your in-Premise datacenter. This is done purely on commodity servers and storage hardware resulting in a powerful, massively scalable, and highly available NAS environment. Heketi Heketi provides a RESTful management interface which can be used to manage the lifecycle of GlusterFS Storage volumes. This allows for easy integration of GlusterFS with cloud services like OpenShift, OpenStack Manila and Kubernetes for dynamic volumes provisioning. Heketi will automatically determine the location for bricks across the cluster, making sure to place bricks and its replicas across different failure domains. Environment Setup Our setup of GlusterFS on CentOS 8 / CentOS 7 systems will comprise of below. CentOS 8 / CentOS 8 Linux servers GlusterFS 6 software release Three GlusterFS Servers Each server has three disks (@10GB) DNS resolution configured – You can use /etc/hosts file if you don’t have DNS server User account with sudo or root user access Heketi will be installed in one of the GlusterFS nodes. Under the /etc/hosts file of each server, I have: $ sudo vim /etc/hosts 10.10.1.168 gluster01 10.10.1.179 gluster02 10.10.1.64 gluster03 Step 1: Update all servers Ensure all servers that will be part of the GlusterFS storage cluster are updated. sudo yum -y update Since there may be kernel updates, I recommend you reboot your system. sudo reboot Step 2: Configure NTP time synchronization You need to synchronize time across all GlusterFS Storage servers using the Network Time Protocol (NTP) or Chrony daemon. Refer to our guide below. Setup Chrony Time synchronization on CentOS Step 3: Add GlusterFS repository Download GlusterFS repository on all servers. We’ll do GlusterFS 6 in this setup since it is the latest stable release. CentOS 8: sudo yum -y install wget sudo wget -O /etc/yum.repos.d/glusterfs-rhel8.repo https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/CentOS/glusterfs-rhel8.repo CentOS 7: sudo yum -y install centos-release-gluster6 Once you’ve added the repository, update your YUM index. sudo yum makecache Step 3: Install GlusterFS on CentOS 8 / CentOS 7 Installation of GlusterFS on CentOS 8 differs from CentOS 7 installation. Install GlusterFS on CentOS 8 Enable PowerTools repository sudo dnf -y install dnf-utils sudo yum-config-manager --enable PowerTools sudo dnf -y install glusterfs-server Install GlusterFS on CentOS 7 Run the following commands on all nodes to install latest GlusterFS on CentOS 7. sudo yum -y install glusterfs-server Confirm installed package version. 
$ rpm -qi glusterfs-server Name : glusterfs-server Version : 6.5 Release : 2.el8 Architecture: x86_64 Install Date: Tue 29 Oct 2019 06:58:16 PM EAT Group : Unspecified Size : 6560178 License : GPLv2 or LGPLv3+ Signature : RSA/SHA256, Wed 28 Aug 2019 03:39:40 PM EAT, Key ID 43607f0dc2f8238c Source RPM : glusterfs-6.5-2.el8.src.rpm Build Date : Wed 28 Aug 2019 03:27:19 PM EAT Build Host : buildhw-09.phx2.fedoraproject.org Relocations : (not relocatable) Packager : Fedora Project Vendor : Fedora Project URL : http://docs.gluster.org/ Bug URL : https://bugz.fedoraproject.org/glusterfs Summary : Distributed file-system server You can also use the gluster command to check version. $ gluster --version glusterfs 6.5 Repository revision: git://git.gluster.org/glusterfs.git Copyright (c) 2006-2016 Red Hat, Inc. GlusterFS comes with ABSOLUTELY NO WARRA
NTY. It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation. $ glusterfsd --version Step 4: Start GlusterFS Service on CentOS 8 / CentOS 7 After installation of GlusterFS Service on CentOS 8 / CentOS 7, start and enable the service. sudo systemctl enable --now glusterd.service Load all Kernel modules that will be required by Heketi. for i in dm_snapshot dm_mirror dm_thin_pool; do sudo modprobe $i done If you have an active firewalld service, allow ports used by GlusterFS. sudo firewall-cmd --add-service=glusterfs --permanent sudo firewall-cmd --reload Check service status on all nodes. $ systemctl status glusterd ● glusterd.service - GlusterFS, a clustered file-system server Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2019-10-29 19:10:08 EAT; 3min 1s ago Docs: man:glusterd(8) Main PID: 32027 (glusterd) Tasks: 9 (limit: 11512) Memory: 3.9M CGroup: /system.slice/glusterd.service └─32027 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO Oct 29 19:10:08 gluster01.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server... Oct 29 19:10:08 gluster01.novalocal systemd[1]: Started GlusterFS, a clustered file-system server. $ systemctl status glusterd ● glusterd.service - GlusterFS, a clustered file-system server Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2019-10-29 19:10:13 EAT; 3min 51s ago Docs: man:glusterd(8) Main PID: 3706 (glusterd) Tasks: 9 (limit: 11512) Memory: 3.8M CGroup: /system.slice/glusterd.service └─3706 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO Oct 29 19:10:13 gluster02.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server… Oct 29 19:10:13 gluster02.novalocal systemd[1]: Started GlusterFS, a clustered file-system server. $ systemctl status glusterd ● glusterd.service - GlusterFS, a clustered file-system server Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2019-10-29 19:10:15 EAT; 4min 24s ago Docs: man:glusterd(8) Main PID: 3716 (glusterd) Tasks: 9 (limit: 11512) Memory: 3.8M CGroup: /system.slice/glusterd.service └─3716 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO Oct 29 19:10:15 gluster03.novalocal systemd[1]: Starting GlusterFS, a clustered file-system server… Oct 29 19:10:15 gluster03.novalocal systemd[1]: Started GlusterFS, a clustered file-system server. Probe other nodes in the cluster [root@gluster01 ~]# gluster peer probe gluster02 peer probe: success. [root@gluster01 ~]# gluster peer probe gluster03 peer probe: success. [root@gluster01 ~]# gluster peer status Number of Peers: 2 Hostname: gluster02 Uuid: ebfdf84f-3d66-4f98-93df-a6442b5466ed State: Peer in Cluster (Connected) Hostname: gluster03 Uuid: 98547ab1-9565-4f71-928c-8e4e13eb61c3 State: Peer in Cluster (Connected) Step 5: Install Heketi on one of the nodes I’ll use gluster01 node to run Heketi service. Download the latest arhives of Heketi server and client from Github releases page. curl -s https://api.github.com/repos/heketi/heketi/releases/latest \ | grep browser_download_url \ | grep linux.amd64 \ | cut -d '"' -f 4 \ | wget -qi - Extract downloaded heketi archives. 
for i in `ls | grep heketi | grep .tar.gz`; do tar xvf $i; done Copy the heketi & heketi-cli binary packages. sudo cp heketi/heketi,heketi-cli /usr/local/bin Confirm they are available in your PATH $ heketi --version Heketi v10.4.0-release-10 (using go: go1.15
.14) $ heketi-cli --version heketi-cli v10.4.0-release-10 Step 5: Configure Heketi Server Add heketi system user. sudo groupadd --system heketi sudo useradd -s /sbin/nologin --system -g heketi heketi Create heketi configurations and data paths. sudo mkdir -p /var/lib/heketi /etc/heketi /var/log/heketi Copy heketi configuration file to /etc/heketi directory. sudo cp heketi/heketi.json /etc/heketi Edit the Heketi configuration file sudo vim /etc/heketi/heketi.json Set service port: "port": "8080" Set admin and use secrets. "_jwt": "Private keys for access", "jwt": "_admin": "Admin has access to all APIs", "admin": "key": "ivd7dfORN7QNeKVO" , "_user": "User only has access to /volumes endpoint", "user": "key": "gZPgdZ8NtBNj6jfp" , Configure glusterfs executor _sshexec_comment": "SSH username and private key file information", "sshexec": "keyfile": "/etc/heketi/heketi_key", "user": "root", "port": "22", "fstab": "/etc/fstab", ...... , If you use a user other than root, ensure it has passwordless sudo privilege escalation. Confirm database path is set properly "_db_comment": "Database file name", "db": "/var/lib/heketi/heketi.db", }, Below is my complete configuration file modified. "_port_comment": "Heketi Server Port Number", "port": "8080", "_enable_tls_comment": "Enable TLS in Heketi Server", "enable_tls": false, "_cert_file_comment": "Path to a valid certificate file", "cert_file": "", "_key_file_comment": "Path to a valid private key file", "key_file": "", "_use_auth": "Enable JWT authorization. Please enable for deployment", "use_auth": false, "_jwt": "Private keys for access", "jwt": "_admin": "Admin has access to all APIs", "admin": "key": "ivd7dfORN7QNeKVO" , "_user": "User only has access to /volumes endpoint", "user": "key": "gZPgdZ8NtBNj6jfp" , "_backup_db_to_kube_secret": "Backup the heketi database to a Kubernetes secret when running in Kubernetes. Default is off.", "backup_db_to_kube_secret": false, "_profiling": "Enable go/pprof profiling on the /debug/pprof endpoints.", "profiling": false, "_glusterfs_comment": "GlusterFS Configuration", "glusterfs": "_executor_comment": [ "Execute plugin. Possible choices: mock, ssh", "mock: This setting is used for testing and development.", " It will not send commands to any node.", "ssh: This setting will notify Heketi to ssh to the nodes.", " It will need the values in sshexec to be configured.", "kubernetes: Communicate with GlusterFS containers over", " Kubernetes exec api." ], "executor": "mock", "_sshexec_comment": "SSH username and private key file information", "sshexec": "keyfile": "/etc/heketi/heketi_key", "user": "cloud-user", "port": "22", "fstab": "/etc/fstab" , "_db_comment": "Database file name", "db": "/var/lib/heketi/heketi.db", "_refresh_time_monitor_gluster_nodes": "Refresh time in seconds to monitor Gluster nodes", "refresh_time_monitor_gluster_nodes": 120, "_start_time_monitor_gluster_nodes": "Start time in seconds to monitor Gluster nodes when the heketi comes up", "start_time_monitor_gluster_nodes": 10, "_loglevel_comment": [ "Set log level. Choices are:", " none, critical, error, warning, info, debug", "Default is warning" ], "loglevel" : "debug", "_auto_create_block_hosting_volume": "Creates Block Hosting volumes automatically if not found or exsisting volume exhausted", "auto_create_block_hosting_volume": true, "_block_hosting_volume_size": "New block hosting volume will be created in size mentioned, This is considered only if auto-create is enabled.", "block_hosting_volume_size":
500, "_block_hosting_volume_options": "New block hosting volume will be created with the following set of options. Removing the group gluster-block option is NOT recommended. Additional options can be added next to it separated by a comma.", "block_hosting_volume_options": "group gluster-block", "_pre_request_volume_options": "Volume options that will be applied for all volumes created. Can be overridden by volume options in volume create request.", "pre_request_volume_options": "", "_post_request_volume_options": "Volume options that will be applied for all volumes created. To be used to override volume options in volume create request.", "post_request_volume_options": "" Generate Heketi SSH keys sudo ssh-keygen -f /etc/heketi/heketi_key -t rsa -N '' sudo chown heketi:heketi /etc/heketi/heketi_key* Copy generated public key to all GlusterFS nodes for i in gluster01 gluster02 gluster03; do ssh-copy-id -i /etc/heketi/heketi_key.pub root@$i done Alternatively, you can cat the contents of /etc/heketi/heketi_key.pub and add to each server ~/.ssh/authorized_keys Confirm you can access the GlusterFS nodes with the Heketi private key: $ ssh -i /etc/heketi/heketi_key root@gluster02 The authenticity of host 'gluster02 (10.10.1.179)' can't be established. ECDSA key fingerprint is SHA256:GXNdsSxmp2O104rPB4RmYsH73nTa5U10cw3LG22sANc. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'gluster02,10.10.1.179' (ECDSA) to the list of known hosts. Activate the web console with: systemctl enable --now cockpit.socket Last login: Tue Oct 29 20:11:32 2019 from 10.10.1.168 [root@gluster02 ~]# Create Systemd Unit file Create Systemd unit file for Heketi $ sudo vim /etc/systemd/system/heketi.service [Unit] Description=Heketi Server [Service] Type=simple WorkingDirectory=/var/lib/heketi EnvironmentFile=-/etc/heketi/heketi.env User=heketi ExecStart=/usr/local/bin/heketi --config=/etc/heketi/heketi.json Restart=on-failure StandardOutput=syslog StandardError=syslog [Install] WantedBy=multi-user.target Also download sample environment file for Heketi. sudo wget -O /etc/heketi/heketi.env https://raw.githubusercontent.com/heketi/heketi/master/extras/systemd/heketi.env Set all directory permissions sudo chown -R heketi:heketi /var/lib/heketi /var/log/heketi /etc/heketi Start Heketi service Disable SELinux sudo setenforce 0 sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config Then reload Systemd and start Heketi service sudo systemctl daemon-reload sudo systemctl enable --now heketi Confirm the service is running. 
$ systemctl status heketi ● heketi.service - Heketi Server Loaded: loaded (/etc/systemd/system/heketi.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2019-10-29 20:29:23 EAT; 4s ago Main PID: 2166 (heketi) Tasks: 5 (limit: 11512) Memory: 8.7M CGroup: /system.slice/heketi.service └─2166 /usr/local/bin/heketi --config=/etc/heketi/heketi.json Oct 29 20:29:23 gluster01.novalocal heketi[2166]: Heketi v9.0.0 Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Loaded mock executor Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Volumes per cluster limit is set to default value of 1000 Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: Auto Create Block Hosting Volume set to true Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: New Block Hosting Volume size 500 GB Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Block: New Block Hosting Volume Options: group gluster-block Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 GlusterFS Application Loaded Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Started background pending operations cle
aner Oct 29 20:29:23 gluster01.novalocal heketi[2166]: [heketi] INFO 2019/10/29 20:29:23 Started Node Health Cache Monitor Oct 29 20:29:23 gluster01.novalocal heketi[2166]: Listening on port 8080 Step 6: Create Heketi Topology file I’ve created an ansible playbook to be used for generating and updating topology file. Editing json file manually can be stressing. This will make scaling easy. Install Ansible Locally – refer to Ansible Installation of Ansible documentation. For CentOS: sudo yum -y install epel-release sudo yum -y install ansible For Ubuntu: sudo apt update sudo apt install software-properties-common sudo apt-add-repository --yes --update ppa:ansible/ansible sudo apt install ansible Once Ansible is installed, create project folder structure mkdir -p ~/projects/ansible/roles/heketi/tasks,templates,defaults Create Heketi Topology Jinja2 template $ vim ~/projects/ansible/roles/heketi/templates/topology.json.j2 "clusters": [ "nodes": [ % if gluster_servers is defined and gluster_servers is iterable % % for item in gluster_servers % "node": "hostnames": "manage": [ " item.servername " ], "storage": [ " item.serverip " ] , "zone": item.zone , "devices": [ " join ("\",\"") " ] % if not loop.last %,% endif % % endfor % % endif % ] ] Define variables – Set values to match your environment setup. $ vim ~/projects/ansible/roles/heketi/defaults/main.yml --- # GlusterFS nodes gluster_servers: - servername: gluster01 serverip: 10.10.1.168 zone: 1 disks: - /dev/vdc - /dev/vdd - /dev/vde - servername: gluster02 serverip: 10.10.1.179 zone: 1 disks: - /dev/vdc - /dev/vdd - /dev/vde - servername: gluster03 serverip: 10.10.1.64 zone: 1 disks: - /dev/vdc - /dev/vdd - /dev/vde Create Ansible task $ vim ~/projects/ansible/roles/heketi/tasks/main.yml --- - name: Copy heketi topology file template: src: topology.json.j2 dest: /etc/heketi/topology.json - name: Set proper file ownership file: path: /etc/heketi/topology.json owner: heketi group: heketi Create playbook and inventory file $ vim ~/projects/ansible/heketi.yml --- - name: Generate Heketi topology file and copy to Heketi Server hosts: gluster01 become: yes become_method: sudo roles: - heketi $ vim ~/projects/ansible/hosts gluster01 This is how everything should looks like $ cd ~/projects/ansible/ $ tree . ├── heketi.yml ├── hosts └── roles └── heketi ├── defaults │   └── main.yml ├── tasks │   └── main.yml └── templates └── topology.json.j2 5 directories, 5 files Run playbook $ cd ~/projects/ansible $ ansible-playbook -i hosts --user myuser --ask-pass --ask-become-pass heketi.yml # Key based and Passwordless sudo / root, use: $ ansible-playbook -i hosts --user myuser heketi.yml Execution output Confirm the contents of generated Topology file. $ cat /etc/heketi/topology.json "clusters": [ "nodes": [ "node": "hostnames": "manage": [ "gluster01" ], "storage": [ "10.10.1.168" ] , "zone": 1 , "devices": [ "/dev/vdc","/dev/vdd","/dev/vde" ] , "node": "hostnames": "manage": [ "gluster02" ], "storage": [ "10.10.1.179" ] , "zone": 1 , "devices
": [ "/dev/vdc","/dev/vdd","/dev/vde" ] , "node": "hostnames": "manage": [ "gluster03" ], "storage": [ "10.10.1.64" ] , "zone": 1 , "devices": [ "/dev/vdc","/dev/vdd","/dev/vde" ] ] ] Step 7: Load Heketi Topology file If all looks good, load the topology file. # heketi-cli topology load --user admin --secret heketi_admin_secret --json=/etc/heketi/topology.json In my setup, I’ll run # heketi-cli topology load --user admin --secret ivd7dfORN7QNeKVO --json=/etc/heketi/topology.json Creating cluster ... ID: dda582cc3bd943421d57f4e78585a5a9 Allowing file volumes on cluster. Allowing block volumes on cluster. Creating node gluster01 ... ID: 0c349dcaec068d7a78334deaef5cbb9a Adding device /dev/vdc ... OK Adding device /dev/vdd ... OK Adding device /dev/vde ... OK Creating node gluster02 ... ID: 48d7274f325f3d59a3a6df80771d5aed Adding device /dev/vdc ... OK Adding device /dev/vdd ... OK Adding device /dev/vde ... OK Creating node gluster03 ... ID: 4d6a24b992d5fe53ed78011e0ab76ead Adding device /dev/vdc ... OK Adding device /dev/vdd ... OK Adding device /dev/vde ... OK Same output is shared in the screenshot below. Step 7: Confirm GlusterFS / Heketi Setup Add the Heketi access credentials to your ~/.bashrc file. $ vim ~/.bashrc export HEKETI_CLI_SERVER=http://heketiserverip:8080 export HEKETI_CLI_USER=admin export HEKETI_CLI_KEY="AdminPass" Source the file. source ~/.bashrc After loading topology file, run the command below to list your clusters. # heketi-cli cluster list Clusters: Id:dda582cc3bd943421d57f4e78585a5a9 [file][block] List nodes available in the Cluster: # heketi-cli node list Id:0c349dcaec068d7a78334deaef5cbb9a Cluster:dda582cc3bd943421d57f4e78585a5a9 Id:48d7274f325f3d59a3a6df80771d5aed Cluster:dda582cc3bd943421d57f4e78585a5a9 Id:4d6a24b992d5fe53ed78011e0ab76ead Cluster:dda582cc3bd943421d57f4e78585a5a9 Execute the following command to check the details of a particular node: # heketi-cli node info ID # heketi-cli node info 0c349dcaec068d7a78334deaef5cbb9a Node Id: 0c349dcaec068d7a78334deaef5cbb9a State: online Cluster Id: dda582cc3bd943421d57f4e78585a5a9 Zone: 1 Management Hostname: gluster01 Storage Hostname: 10.10.1.168 Devices: Id:0f26bd867f2bd8bc126ff3193b3611dc Name:/dev/vdd State:online Size (GiB):500 Used (GiB):0 Free (GiB):10 Bricks:0 Id:29c34e25bb30db68d70e5fd3afd795ec Name:/dev/vdc State:online Size (GiB):500 Used (GiB):0 Free (GiB):10 Bricks:0 Id:feb55e58d07421c422a088576b42e5ff Name:/dev/vde State:online Size (GiB):500 Used (GiB):0 Free (GiB):10 Bricks:0 Let’s now create a Gluster volume to verify Heketi & GlusterFS is working. # heketi-cli volume create --size=1 Name: vol_7e071706e1c22052e5121c29966c3803 Size: 1 Volume Id: 7e071706e1c22052e5121c29966c3803 Cluster Id: dda582cc3bd943421d57f4e78585a5a9 Mount: 10.10.1.168:vol_7e071706e1c22052e5121c29966c3803 Mount Options: backup-volfile-servers=10.10.1.179,10.10.1.64 Block: false Free Size: 0 Reserved Size: 0 Block Hosting Restriction: (none) Block Volumes: [] Durability Type: replicate Distribute Count: 1 Replica Count: 3 # heketi-cli volume list Id:7e071706e1c22052e5121c29966c3803 Cluster:dda582cc3bd943421d57f4e78585a5a9 Name:vol_7e071706e1c22052e5121c29966c3803 To view topology, run: heketi-cli topology info The gluster command can be also used to check servers in the cluster. gluster pool list For integration with Kubernetes, check: Configure Kubernetes Dynamic Volume Provisioning With Heketi & GlusterFS We now have a working GlusterFS and Heketi setup. Our next guides
will cover how we can configure Persistent Volumes Dynamic provisioning for Kubernetes and OpenShift by using Heketi and GlusterFS. Reference: GlusterFS Documentation Heketi Documentation
0 notes