# Rook storage in Kubernetes
virtualizationhowto · 1 year ago
Text
Top 5 Open Source Kubernetes Storage Solutions
Historically, Kubernetes storage has been challenging to configure and required specialized knowledge to get up and running. However, the K8s data storage landscape has evolved considerably, with many solid options that are relatively easy to implement for data stored in Kubernetes clusters. Those running Kubernetes in a home lab will also benefit from the free and open-source…
raza102 · 9 months ago
Text
Ensuring Data Resilience: Top 10 Kubernetes Backup Solutions
In the dynamic landscape of container orchestration, Kubernetes has emerged as a leading platform for managing and deploying containerized applications. As organizations increasingly rely on Kubernetes for their containerized workloads, the need for robust data resilience strategies becomes paramount. One crucial aspect of ensuring data resilience in Kubernetes environments is implementing reliable backup solutions. In this article, we will explore the top 10 Kubernetes backup solutions that organizations can leverage to safeguard their critical data.
1. Velero
Velero, an open-source backup and restore tool, is designed specifically for Kubernetes clusters. It provides snapshot and restore capabilities, allowing users to create backups of their entire cluster or selected namespaces.
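For a sense of the workflow, typical Velero usage from the CLI looks roughly like this (backup, schedule, and namespace names are illustrative, and the Velero server component is assumed to already be installed in the cluster):

```bash
# Back up a single namespace
velero backup create app-backup --include-namespaces my-app

# Schedule a daily backup at 03:00 with a 30-day retention window
velero schedule create daily-app-backup \
  --schedule "0 3 * * *" \
  --include-namespaces my-app \
  --ttl 720h

# Restore from an existing backup
velero restore create --from-backup app-backup
```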
2. Kasten K10
Kasten K10 is a data management platform for Kubernetes that offers backup, disaster recovery, and mobility functionalities. It supports various cloud providers and on-premises deployments, ensuring flexibility for diverse Kubernetes environments.
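As a rough sketch, a typical Helm-based install of K10 looks like the following (the release and namespace names are common defaults, but treat the exact values as illustrative):

```bash
# Add Kasten's chart repository and install K10 into its own namespace
helm repo add kasten https://charts.kasten.io/
helm repo update
helm install k10 kasten/k10 --namespace kasten-io --create-namespace
```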
3. Stash
Stash, another open-source project, focuses on backing up Kubernetes volumes and custom resources. It supports scheduled backups, retention policies, and encryption, providing a comprehensive solution for data protection.
4. TrilioVault
TrilioVault specializes in protecting stateful applications in Kubernetes environments. With features like incremental backups and point-in-time recovery, it ensures that organizations can recover their applications quickly and efficiently.
5. Ark
Heptio Ark, since renamed Velero (see #1) and now maintained by VMware, offers a simple and robust solution for Kubernetes backup and recovery. It supports both on-premises and cloud-based storage, providing flexibility for diverse storage architectures.
6. KubeBackup
KubeBackup is a lightweight and easy-to-use backup solution that supports scheduled backups and incremental backups. It is designed to be simple yet effective in ensuring data resilience for Kubernetes applications.
7. Rook
Rook extends Kubernetes to provide a cloud-native storage orchestrator. While not a backup solution per se, it enables the creation of distributed storage systems that can be leveraged for reliable data storage and retrieval.
8. Backupify
Backupify focuses on protecting cloud-native applications, including those running on Kubernetes. It provides automated backups, encryption, and a user-friendly interface for managing backup and recovery processes.
9. StashAway
StashAway is an open-source project that offers both backup and restore capabilities for Kubernetes applications. It supports volume backups, making it a suitable choice for organizations with complex storage requirements.
10. Duplicity
Duplicity, though not Kubernetes-specific, is a versatile backup tool that can be integrated into Kubernetes environments. It supports encryption and incremental backups, providing an additional layer of data protection.
In conclusion, selecting the right Kubernetes backup solution is crucial for ensuring data resilience in containerized environments. The options mentioned here offer a range of features and capabilities, allowing organizations to choose the solution that best fits their specific needs. By incorporating these backup solutions into their Kubernetes strategy, organizations can mitigate risks and ensure the availability and integrity of their critical data.
cloudlodge · 1 year ago
Text
Top 5 Open Source Kubernetes Storage Solutions - Virtualization Howto
Text
this will be lengthy, and if you want EVEN more infos after that you can dm me, because i can infodump about this for ages
i have an HP ProDesk Mini with some intel 8th or 9th gen T chip and a HP EliteDesk with a Ryzen 2400GE (i think)
both have 8gb ram and a 256gb m.2 ssd. one of these two (i don't remember which) also has a 256gb sata ssd. i will add more SSD storage when I have more money lol. Only the SATA SSD gets used for actual application storage
i also have a Hetzner VPS running for like 7 euros a month so i have a public ip
i didn't build them myself, they were both around 150 euros on ebay each
I do own a rackmount server, but I will probably sell it soon because it is too loud. it is therefore not running anything rn and just sits powered off in my rack
I am using Kubernetes, which can also run docker containers (and only containers). My runtime is containerd (which is also used by Docker). Kubernetes is clustered, so all my nodes are working together and can move containers between each other as they please. they are also connected via an internal wireguard network, which enables the machines behind NAT to easily expose stuff via the public IP of my VPS.
My distro of choice is Talos Linux, it's a distro designed only for Kubernetes. It's very minimal, it doesn't even have SSH access, you administer everything via API. It also doesn't have tools like ls, cd or anything like that. I read in an article today that there are only 12 unique binaries in the entire system after install. The distro is immutable too. I mainly use it because it's very easy to configure and extremely reliable. I once configured a new cluster (in VMs though) in less than 5 minutes
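to give you an idea of what "administer everything via API" means in practice, the basic bring-up flow looks roughly like this (the IPs and file names here are placeholders, not my real ones):

```bash
# generate machine configs for a new cluster
talosctl gen config my-cluster https://10.0.0.10:6443

# push configs to the nodes over the Talos API
talosctl apply-config --insecure --nodes 10.0.0.10 --file controlplane.yaml
talosctl apply-config --insecure --nodes 10.0.0.11 --file worker.yaml

# bootstrap etcd once on the first control plane node, then grab a kubeconfig
talosctl bootstrap --nodes 10.0.0.10 --endpoints 10.0.0.10
talosctl kubeconfig --nodes 10.0.0.10 --endpoints 10.0.0.10
```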
i don't have any dashboard that can make changes, i configure everything via kubectl from my local dev machine. Kubernetes has an API which gets used for configuring everything. I do use Grafana for monitoring. Here is a picture of my current grafana dashboard:
[Image: my current Grafana dashboard]
since everything is done via yaml files, i am using neovim as my editor too
i currently don't run any software that is useful as a selfhosting thing, i just mostly finished configuring the infrastructure.
i have installed gatus for monitoring uptime today, but that's it for now
i want to install pihole or a pihole-similar thing soon
also headscale or another wireguard vpn server (preferably one which supports SSO)
for my like infrastructure stack, i am running Rook/Ceph for storage, i have ArgoCD as my CI/CD and GitOps tool, i also use Istio as a service mesh for networking. my load balancer is MetalLB and Grafana, Prometheus, Alertmanager, Loki and Promtail are my logging, monitoring and alerting applications
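as a hedged example of the GitOps part, an Argo CD Application that keeps something like gatus synced from a git repo looks roughly like this (the repo URL and paths are made up for illustration):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gatus
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab-gitops.git   # placeholder repo
    targetRevision: main
    path: apps/gatus
  destination:
    server: https://kubernetes.default.svc
    namespace: gatus
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
EOF
```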
(as you can see i can talk a LOT about this) (i will always answer further questions about this lol)
i mostly finished setting up my server setup now :3
send me asks for software to self-host and i will take a look at them!
computingpostcom · 2 years ago
Text
The OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for cluster operations that require data persistence. As a developer you can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. In this short guide you'll learn how to expand an existing PVC in OpenShift when using OpenShift Container Storage.

Before you can expand persistent volumes, the StorageClass must have the allowVolumeExpansion field set to true. Here is a list of storage classes available in my OpenShift cluster:

```
$ oc get sc
NAME                                  PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                            kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
localfile                             kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
ocs-storagecluster-ceph-rbd           openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  169d
ocs-storagecluster-cephfs (default)   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   169d
openshift-storage.noobaa.io           openshift-storage.noobaa.io/obc         Delete          Immediate              false                  169d
thin                                  kubernetes.io/vsphere-volume            Delete          Immediate              false                  169d
unused                                kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  190d
```

I'll change the default storage class, which is ocs-storagecluster-cephfs. Let's export the configuration to a YAML file:

```
oc get sc ocs-storagecluster-cephfs -o yaml > ocs-storagecluster-cephfs.yml
```

I'll modify the file to add the allowVolumeExpansion field:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ocs-storagecluster-cephfs
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true # Added field
```

Delete the currently configured storage class, since a StorageClass is an immutable resource:

```
$ oc delete sc ocs-storagecluster-cephfs
storageclass.storage.k8s.io "ocs-storagecluster-cephfs" deleted
```

Apply the modified storage class configuration by running the following command:

```
$ oc apply -f ocs-storagecluster-cephfs.yml
storageclass.storage.k8s.io/ocs-storagecluster-cephfs created
```

List storage classes to confirm it was indeed created:

```
$ oc get sc
NAME                                  PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                            kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
localfile                             kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
ocs-storagecluster-ceph-rbd           openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  169d
ocs-storagecluster-cephfs (default)   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   5m20s
openshift-storage.noobaa.io           openshift-storage.noobaa.io/obc         Delete          Immediate              false                  169d
thin                                  kubernetes.io/vsphere-volume            Delete          Immediate              false                  169d
unused                                kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  190d
```

Output the YAML and confirm the new setting was applied:

```yaml
$ oc get sc ocs-storagecluster-cephfs -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"ocs-storagecluster-cephfs"},"parameters":{"clusterID":"openshift-storage","csi.storage.k8s.io/node-stage-secret-name":"rook-csi-cephfs-node","csi.storage.k8s.io/node-stage-secret-namespace":"openshift-storage","csi.storage.k8s.io/provisioner-secret-name":"rook-csi-cephfs-provisioner","csi.storage.k8s.io/provisioner-secret-namespace":"openshift-storage","fsName":"ocs-storagecluster-cephfilesystem"},"provisioner":"openshift-storage.cephfs.csi.ceph.com","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2020-10-31T13:33:56Z"
  name: ocs-storagecluster-cephfs
  resourceVersion: "242503097"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
  uid: 5aa95d3b-c39c-438d-85af-5c8550d6ed5b
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```

How To Expand a PVC in OpenShift

List available PVCs in the namespace:

```
$ oc get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
data-harbor-harbor-redis-0               Bound    pvc-e516b793-60c5-431d-955f-b1d57bdb556b   1Gi        RWO            ocs-storagecluster-cephfs   169d
database-data-harbor-harbor-database-0   Bound    pvc-00a53065-9790-4291-8f00-288359c00f6c   2Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-chartmuseum                Bound    pvc-405c68de-eecd-4db1-9ca1-5ca97eeab37c   5Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-jobservice                 Bound    pvc-e52f231e-0023-41ad-9aff-98ac53cecb44   2Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-registry                   Bound    pvc-77e159d4-4059-47dd-9c61-16a6e8b37a14   100Gi      RWX            ocs-storagecluster-cephfs   39d
```

Edit the PVC and change its capacity:

```yaml
$ oc edit pvc data-harbor-harbor-redis-0
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```

Delete the pod holding the claim:

```
$ oc delete pods harbor-harbor-redis-0
pod "harbor-harbor-redis-0" deleted
```

Recreate the deployment that was claiming the storage and it should utilize the new capacity.

Expanding a PVC on the OpenShift Web Console

You can also expand a PVC from the web console. Click on "Expand PVC" and set the desired PVC capacity.
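Note that, depending on your cluster version and CSI driver, you can often avoid the delete/recreate and interactive edit steps above by patching the objects in place; a hedged sketch:

```bash
# Enable volume expansion on the storage class without recreating it
oc patch storageclass ocs-storagecluster-cephfs \
  -p '{"allowVolumeExpansion": true}'

# Grow the PVC directly instead of opening an editor
oc patch pvc data-harbor-harbor-redis-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
```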
hackgit · 2 years ago
Text
Rook: Storage Orchestration for Kubernetes. Open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments. https://github.com/rook/rook
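For reference, the upstream quickstart boils down to applying a handful of manifests from the repository (the release branch shown is just an example; check the project for the current one):

```bash
git clone --single-branch --branch release-1.8 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml   # install the Rook operator
kubectl create -f cluster.yaml                                # declare the Ceph cluster
kubectl -n rook-ceph get pods                                 # watch the cluster come up
```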
mmorellm · 4 years ago
Link
Data wrangling for Kubernetes-orchestrated containers is the new storage frontier. Pure Storage's Portworx, Veeam's Kasten, Commvault's Hedvig, Robin.io, StorageOS, SUSE's Rancher, MayaData's Mayastor, Rook, and many others are building products and services to mine this very rich new seam.
un-enfant-immature · 6 years ago
Text
The Cloud Native Computing Foundation adds etcd to its open source stable
The Cloud Native Computing Foundation (CNCF), the open source home of projects like Kubernetes and Vitess, today announced that its technical committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.
Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open source projects that use etcd include Cloud Foundry and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.
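At its core, etcd exposes a simple key-value API; a minimal illustration using the etcdctl client (v3 API, with a local endpoint assumed):

```bash
# Write and read a key
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 put /config/feature-flag "on"
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 get /config/feature-flag

# Watch the key for changes, the pattern Kubernetes relies on for tracking cluster state
ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 watch /config/feature-flag
```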
“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”
Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 different projects that fall under its 'incubated technologies' umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.
That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone, about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).
virtualizationhowto · 1 year ago
Text
Kubernetes Persistent Volume Setup with Microk8s Rook and Ceph
Kubernetes persistent volume management is a cornerstone of modern container orchestration. Utilizing persistent storage can lead to more resilient and scalable applications. This guide delves into an experiment using Microk8s, Ceph, and Rook to create a robust storage solution for your Kubernetes cluster. Table of contents: What is a Kubernetes Persistent Volume? Understanding Persistent…
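For a flavor of what the guide works toward, a persistent volume claim against a Rook/Ceph-backed storage class generally looks like this (the storage class name and size are illustrative):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce            # single-node read/write access
  storageClassName: ceph-rbd   # depends on how Rook/Microk8s storage was set up
  resources:
    requests:
      storage: 5Gi
EOF
```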
tqvcancun · 4 years ago
Text
Red Hat webinar: "¡No dejes a tus contenedores sin almacenamiento!" ("Don't leave your containers without storage!")
Did you attend the latest Red Hat webinars on OpenShift? Then you will be eager to know which one comes next and when. Well, get ready, because you have an appointment with the open-source giant this very week: next Wednesday, June 24. The topic the company has chosen to continue its "Vitaminas para la innovación" (Vitamins for Innovation) series is also genuinely interesting and, like all the sessions offered so far, is focused entirely on practical content.
Since the first session, the Vitaminas para la innovación series has been an excellent opportunity for professionals and companies focused on OpenShift, the market-leading enterprise Kubernetes platform. Each session has built on the last in terms of content: from an introduction to the platform and a presentation of its latest features, to cloud-native application development and the integration of the various DevOps-oriented services.
"Don't leave your containers without storage!" is the logical continuation of the last two sessions and, like them, is another hands-on session, in this case focused on the most recent version of the Red Hat OpenShift Container Storage (OCS) 4 storage solution. Storage, after all, is a key factor in guaranteeing the efficient deployment of any application, service, or, ultimately, any workload you have planned.
"Even if your applications are mostly stateless and designed as microservices on OpenShift, you still need storage for the platform itself: registry, logging, Prometheus. On top of that, there are more and more stateful workloads on OpenShift: distributed databases, messaging, CI/CD pipelines, analytics and ML, which require persistent storage that is as agile, automated, scalable and cloud-native as OpenShift itself," explain Red Hat's experts. The solution chosen to address this is OpenShift Container Storage.
Just as OpenShift relates to Kubernetes, OpenShift Container Storage is an implementation tuned and refined by Red Hat, composed of several of the most prominent open-source projects for cloud-native storage, including: Ceph for software-defined storage; the Rook Kubernetes operator, used to automate and manage Ceph on an OpenShift cluster; and NooBaa, used to automatically provision object storage and multi-cloud S3 buckets.
"Don't leave your containers without storage!" will therefore center on a hands-on demo of Red Hat OpenShift Container Storage 4 in which the speaker will walk through the most common tasks, such as deploying containers natively on OpenShift, automated provisioning across all storage types, configuring and ensuring high availability and redundancy for persistent volumes, and management and monitoring through the OpenShift console itself.
In short, "Don't leave your containers without storage!" is another Red Hat webinar you cannot miss, because once again the company's experts make themselves available with first-hand information. The speaker this time will be Luis Rico, Principal Storage Specialist Solution Architect at Red Hat EMEA, to whom you can submit any questions during the registration process so that he can answer them live.
Source: MuyLinux
salvagenews · 5 years ago
Text
Rook 101: Building software-defined containerised storage in Kubernetes
from ComputerWeekly.com https://ift.tt/2PG8eOy via IFTTT
pcstorenearme · 6 years ago
Text
Four short links: 11 January 2019
Rook — storage orchestration for Kubernetes.
Why We Can’t Have Nice Things (MIT Press) — Trolls’ actions are born of and fueled by culturally sanctioned impulses—which are just as damaging as the trolls’ most disruptive behaviors. […] For trolls, exploitation is a leisure activity; for media, it’s a business strategy. (via Greg J. Smith)
Language Bias in Accident Investigation —
computingpostcom · 2 years ago
Text
This article intends to cover in detail the installation and configuration of Rook, and how to integrate a highly available Ceph storage cluster into an existing Kubernetes cluster. I'm performing this process on a recent deployment of Kubernetes on Rocky Linux 8 servers, but it can be used with any other Kubernetes cluster deployed with kubeadm or automation tools such as Kubespray and Rancher.

In the initial days of Kubernetes, most applications deployed were stateless, meaning there was no need for data persistence. However, as Kubernetes became more popular, there was a concern around reliability when scheduling stateful services. Currently, you can use many types of storage volumes, including vSphere Volumes, Ceph, AWS Elastic Block Store, GlusterFS, NFS, and GCE Persistent Disk, among many others. This gives us the comfort of running stateful services that require a robust storage backend.

What is Rook / Ceph?

Rook is a free-to-use and powerful cloud-native open source storage orchestrator for Kubernetes. It provides support for a diverse set of storage solutions to natively integrate with cloud-native environments. More details about the storage solutions currently supported by Rook are captured in the project status section. Ceph is a distributed storage system that provides file, block and object storage and is deployed in large-scale production clusters.

Rook will enable us to automate deployment, bootstrapping, configuration, scaling and upgrading of a Ceph cluster within a Kubernetes environment. Ceph is widely used in in-house infrastructure where a managed storage solution is rarely an option. Rook uses Kubernetes primitives to run and manage software-defined storage on Kubernetes.

Key components of the Rook storage orchestrator:

- Custom resource definitions (CRDs) – used to create and customize storage clusters. The CRDs are applied to Kubernetes during its deployment process.
- Rook Operator for Ceph – automates the whole configuration of storage components and monitors the cluster to ensure it is healthy and available.
- A DaemonSet called rook-discover – starts a pod running a discovery agent on every node of your Kubernetes cluster to discover any raw disk devices / partitions that can be used as Ceph OSD disks.
- Monitoring – Rook enables the Ceph Dashboard and provides metrics collectors/exporters and monitoring dashboards.

Features of Rook:

- Rook enables you to provision block, file, and object storage with multiple storage providers.
- Capability to efficiently distribute and replicate data to minimize potential loss.
- Rook is designed to manage open-source storage technologies – NFS, Ceph, Cassandra.
- Rook is open source software released under the Apache 2.0 license.
- With Rook you can hyper-scale or hyper-converge your storage clusters within a Kubernetes environment.
- Rook allows system administrators to easily enable elastic storage in your datacenter.
- By adopting Rook as your storage orchestrator you are able to optimize workloads on commodity hardware.

Deploy Rook & Ceph Storage on a Kubernetes Cluster

These are the minimal setup requirements for the deployment of Rook and Ceph storage on a Kubernetes cluster:
A Cluster with minimum of three nodes Available raw disk devices (with no partitions or formatted filesystems) Or Raw partitions (without formatted filesystem) Or Persistent Volumes available from a storage class in block mode Step 1: Add Raw devices/partitions to nodes that will be used by Rook List all the nodes in your Kubernetes Cluster and decide which ones will be used in building Ceph Storage Cluster. I recommend you use worker nodes and not the control plane machines. [root@k8s-bastion ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8smaster01.hirebestengineers.com Ready control-plane,master 28m v1.22.2 k8smaster02.hirebestengineers.com Ready control-plane,master 24m v1.22.2
k8smaster03.hirebestengineers.com Ready control-plane,master 23m v1.22.2 k8snode01.hirebestengineers.com Ready 22m v1.22.2 k8snode02.hirebestengineers.com Ready 21m v1.22.2 k8snode03.hirebestengineers.com Ready 21m v1.22.2 k8snode04.hirebestengineers.com Ready 21m v1.22.2 In my Lab environment, each of the worker nodes will have one raw device – /dev/vdb which we’ll add later. [root@k8s-worker-01 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 253:0 0 40G 0 disk ├─vda1 253:1 0 1M 0 part ├─vda2 253:2 0 1G 0 part /boot ├─vda3 253:3 0 615M 0 part └─vda4 253:4 0 38.4G 0 part / [root@k8s-worker-01 ~]# free -h total used free shared buff/cache available Mem: 15Gi 209Mi 14Gi 8.0Mi 427Mi 14Gi Swap: 614Mi 0B 614Mi The following list of nodes will be used to build storage cluster. [root@kvm-private-lab ~]# virsh list | grep k8s-worker 31 k8s-worker-01-server running 36 k8s-worker-02-server running 38 k8s-worker-03-server running 41 k8s-worker-04-server running Add secondary storage to each node If using KVM hypervisor, start by listing storage pools: $ sudo virsh pool-list Name State Autostart ------------------------------ images active yes I’ll add a 40GB volume on the default storage pool. This can be done with a for loop: for domain in k8s-worker-01..4-server; do sudo virsh vol-create-as images $domain-disk-2.qcow2 40G done Command execution output: Vol k8s-worker-01-server-disk-2.qcow2 created Vol k8s-worker-02-server-disk-2.qcow2 created Vol k8s-worker-03-server-disk-2.qcow2 created Vol k8s-worker-04-server-disk-2.qcow2 created You can check image details including size using qemu-img command: $ qemu-img info /var/lib/libvirt/images/k8s-worker-01-server-disk-2.qcow2 image: /var/lib/libvirt/images/k8s-worker-01-server-disk-2.qcow2 file format: raw virtual size: 40 GiB (42949672960 bytes) disk size: 40 GiB To attach created volume(s) above to the Virtual Machine, run: for domain in k8s-worker-01..4-server; do sudo virsh attach-disk --domain $domain \ --source /var/lib/libvirt/images/$domain-disk-2.qcow2 \ --persistent --target vdb done --persistent: Make live change persistent --target vdb: Target of a disk device Confirm add is successful Disk attached successfully Disk attached successfully Disk attached successfully Disk attached successfully You can confirm that the volume was added to the vm as a block device /dev/vdb [root@k8s-worker-01 ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 253:0 0 40G 0 disk ├─vda1 253:1 0 1M 0 part ├─vda2 253:2 0 1G 0 part /boot ├─vda3 253:3 0 615M 0 part └─vda4 253:4 0 38.4G 0 part / vdb 253:16 0 40G 0 disk Step 2: Deploy Rook Storage Orchestrator Clone the rook project from Github using git command. This should be done on a machine with kubeconfig configured and confirmed to be working. You can also clone Rook’s specific branch as in release tag, for example: cd ~/ git clone --single-branch --branch release-1.8 https://github.com/rook/rook.git All nodes with available raw devices will be used for the Ceph cluster. As stated earlier, at least three nodes are required cd rook/deploy/examples/ Deploy the Rook Operator The first step when performing the deployment of deploy Rook operator is to use. 
Create required CRDs as specified in crds.yaml manifest: [root@k8s-bastion ceph]# kubectl create -f crds.yaml customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephfilesystemmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io created customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created customresourcedefinition.apiextensions.k8s.io/volumereplicationclasses.replication.storage.openshift.io created customresourcedefinition.apiextensions.k8s.io/volumereplications.replication.storage.openshift.io created customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created Create common resources as in common.yaml file: [root@k8s-bastion ceph]# kubectl create -f common.yaml namespace/rook-ceph created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created serviceaccount/rook-ceph-admission-controller created clusterrole.rbac.authorization.k8s.io/rook-ceph-admission-controller-role created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-admission-controller-rolebinding created clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created clusterrole.rbac.authorization.k8s.io/rook-ceph-system created role.rbac.authorization.k8s.io/rook-ceph-system created clusterrole.rbac.authorization.k8s.io/rook-ceph-global created clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created serviceaccount/rook-ceph-system created rolebinding.rbac.authorization.k8s.io/rook-ceph-system created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created serviceaccount/rook-ceph-osd created serviceaccount/rook-ceph-mgr created serviceaccount/rook-ceph-cmd-reporter created role.rbac.authorization.k8s.io/rook-ceph-osd created clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created role.rbac.authorization.k8s.io/rook-ceph-mgr created role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ podsecuritypolicy.policy/00-rook-privileged created clusterrole.rbac.authorization.k8s.io/psp:rook created clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created 
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created serviceaccount/rook-csi-cephfs-plugin-sa created serviceaccount/rook-csi-cephfs-provisioner-sa created role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created serviceaccount/rook-csi-rbd-plugin-sa created serviceaccount/rook-csi-rbd-provisioner-sa created role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created role.rbac.authorization.k8s.io/rook-ceph-purge-osd created rolebinding.rbac.authorization.k8s.io/rook-ceph-purge-osd created serviceaccount/rook-ceph-purge-osd created Finally deploy Rook ceph operator from operator.yaml manifest file: [root@k8s-bastion ceph]# kubectl create -f operator.yaml configmap/rook-ceph-operator-config created deployment.apps/rook-ceph-operator created After few seconds Rook components should be up and running as seen below: [root@k8s-bastion ceph]# kubectl get all -n rook-ceph NAME READY STATUS RESTARTS AGE pod/rook-ceph-operator-9bf8b5959-nz6hd 1/1 Running 0 45s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/rook-ceph-operator 1/1 1 1 45s NAME DESIRED CURRENT READY AGE replicaset.apps/rook-ceph-operator-9bf8b5959 1 1 1 45s Verify the rook-ceph-operator is in the Running state before proceeding: [root@k8s-bastion ceph]# kubectl -n rook-ceph get pod NAME READY STATUS RESTARTS AGE rook-ceph-operator-76dc868c4b-zk2tj 1/1 Running 0 69s Step 3: Create a Ceph Storage Cluster on Kubernetes using Rook Now that we have prepared worker nodes by adding raw disk devices and deployed Rook operator, it is time to deploy the Ceph Storage Cluster. Let’s set default namespace to rook-ceph: # kubectl config set-context --current --namespace rook-ceph Context "kubernetes-admin@kubernetes" modified. Considering that Rook Ceph clusters can discover raw partitions by itself, it is okay to use the default cluster deployment manifest file without any modifications. [root@k8s-bastion ceph]# kubectl create -f cluster.yaml cephcluster.ceph.rook.io/rook-ceph created For any further customizations on Ceph Cluster check Ceph Cluster CRD documentation. 
When not using all the nodes you can expicitly define the nodes and raw devices to be used as seen in example below: storage: # cluster level storage configuration and selection useAllNodes: false useAllDevices: false nodes: - name: "k8snode01.hirebestengineers.com" devices: # specific devices to use for storage can be specified for each node - name: "sdb" - name: "k8snode03.hirebestengineers.com" devices: - name: "sdb" To view all resources created run the following command: kubectl get all -n rook-ceph Watching Pods creation in rook-ceph namespace: [root@k8s-bastion ceph]# kubectl get pods -n rook-ceph -w This is a list of Pods running in the namespace after a successful deployment: [root@k8s-bastion ceph]# kubectl get pods -n rook-ceph NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-8vrgj 3/3 Running 0 5m39s csi-cephfsplugin-9csbp 3/3 Running 0 5m39s csi-cephfsplugin-lh42b 3/3 Running 0 5m39s csi-cephfsplugin-provisioner-b54db7d9b-kh89q 6/6 Running 0 5m39s csi-cephfsplugin-provisioner-b54db7d9b-l92gm 6/6 Running 0 5m39s csi-cephfsplugin-xc8tk 3/3 Running 0 5m39s csi-rbdplugin-28th4 3/3 Running 0 5m41s csi-rbdplugin-76bhw 3/3 Running 0 5m41s csi-rbdplugin-7ll7w 3/3 Running 0 5m41s csi-rbdplugin-provisioner-5845579d68-5rt4x 6/6 Running 0 5m40s csi-rbdplugin-provisioner-5845579d68-p6m7r 6/6 Running 0 5m40s csi-rbdplugin-tjlsk 3/3 Running 0 5m41s rook-ceph-crashcollector-k8snode01.hirebestengineers.com-7ll2x6 1/1 Running 0 3m3s rook-ceph-crashcollector-k8snode02.hirebestengineers.com-8ghnq9 1/1 Running 0 2m40s rook-ceph-crashcollector-k8snode03.hirebestengineers.com-7t88qp 1/1 Running 0 3m14s rook-ceph-crashcollector-k8snode04.hirebestengineers.com-62n95v 1/1 Running 0 3m14s rook-ceph-mgr-a-7cf9865b64-nbcxs 1/1 Running 0 3m17s rook-ceph-mon-a-555c899765-84t2n 1/1 Running 0 5m47s rook-ceph-mon-b-6bbd666b56-lj44v 1/1 Running 0 4m2s rook-ceph-mon-c-854c6d56-dpzgc 1/1 Running 0 3m28s rook-ceph-operator-9bf8b5959-nz6hd 1/1 Running 0 13m rook-ceph-osd-0-5b7875db98-t5mdv 1/1 Running 0 3m6s rook-ceph-osd-1-677c4cd89-b5rq2 1/1 Running 0 3m5s rook-ceph-osd-2-6665bc998f-9ck2f 1/1 Running 0 3m3s rook-ceph-osd-3-75d7b47647-7vfm4 1/1 Running 0 2m40s rook-ceph-osd-prepare-k8snode01.hirebestengineers.com--1-6kbkn 0/1 Completed 0 3m14s rook-ceph-osd-prepare-k8snode02.hirebestengineers.com--1-5hz49 0/1 Completed 0 3m14s rook-ceph-osd-prepare-k8snode03.hirebestengineers.com--1-4b45z 0/1 Completed 0 3m14s rook-ceph-osd-prepare-k8snode04.hirebestengineers.com--1-4q8cs 0/1 Completed 0 3m14s Each worker node will have a Job to add OSDs into Ceph Cluster: [root@k8s-bastion ceph]# kubectl get -n rook-ceph jobs.batch NAME COMPLETIONS DURATION AGE rook-ceph-osd-prepare-k8snode01.hirebestengineers.com 1/1 11s 3m46s rook-ceph-osd-prepare-k8snode02.hirebestengineers.com 1/1 34s 3m46s rook-ceph-osd-prepare-k8snode03.hirebestengineers.com 1/1 10s 3m46s rook-ceph-osd-prepare-k8snode04.hirebestengineers.com 1/1 9s 3m46s [root@k8s-bastion ceph]# kubectl describe jobs.batch rook-ceph-osd-prepare-k8snode01.hirebestengineers.com Verify that the cluster CR has been created and active: [root@k8s-bastion ceph]# kubectl -n rook-ceph get cephcluster NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL rook-ceph /var/lib/rook 3 3m50s Ready Cluster created successfully HEALTH_OK
Step 4: Deploy Rook Ceph toolbox in Kubernetes TheRook Ceph toolbox is a container with common tools used for rook debugging and testing. The toolbox is based on CentOS and any additional tools can be easily installed via yum. We will start a toolbox pod in an Interactive mode for us to connect and execute Ceph commands from a shell. Change to ceph directory: cd ~/ cd rook/deploy/examples Apply the toolbox.yaml manifest file to create toolbox pod: [root@k8s-bastion ceph]# kubectl apply -f toolbox.yaml deployment.apps/rook-ceph-tools created Connect to the pod using kubectl command with exec option: [root@k8s-bastion ~]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash [root@rook-ceph-tools-96c99fbf-qb9cj /]# Check Ceph Storage Cluster Status. Be keen on the value of cluster.health, it should beHEALTH_OK. [root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph status cluster: id: 470b7cde-7355-4550-bdd2-0b79d736b8ac health: HEALTH_OK services: mon: 3 daemons, quorum a,b,c (age 5m) mgr: a(active, since 4m) osd: 4 osds: 4 up (since 4m), 4 in (since 5m) data: pools: 1 pools, 128 pgs objects: 0 objects, 0 B usage: 25 MiB used, 160 GiB / 160 GiB avail pgs: 128 active+clean List all OSDs to check their current status. They should exist and be up. [root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph osd status ID HOST USED AVAIL WR OPS WR DATA RD OPS RD DATA STATE 0 k8snode04.hirebestengineers.com 6776k 39.9G 0 0 0 0 exists,up 1 k8snode03.hirebestengineers.com 6264k 39.9G 0 0 0 0 exists,up 2 k8snode01.hirebestengineers.com 6836k 39.9G 0 0 0 0 exists,up 3 k8snode02.hirebestengineers.com 6708k 39.9G 0 0 0 0 exists,up Check raw storage and pools: [root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph df --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 160 GiB 160 GiB 271 MiB 271 MiB 0.17 TOTAL 160 GiB 160 GiB 271 MiB 271 MiB 0.17 --- POOLS --- POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL device_health_metrics 1 32 0 B 0 0 B 0 51 GiB replicapool 3 32 35 B 8 24 KiB 0 51 GiB k8fs-metadata 8 128 91 KiB 24 372 KiB 0 51 GiB k8fs-data0 9 32 0 B 0 0 B 0 51 GiB [root@rook-ceph-tools-96c99fbf-qb9cj /]# rados df POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR USED COMPR UNDER COMPR device_health_metrics 0 B 0 0 0 0 0 0 0 0 B 0 0 B 0 B 0 B k8fs-data0 0 B 0 0 0 0 0 0 1 1 KiB 2 1 KiB 0 B 0 B k8fs-metadata 372 KiB 24 0 72 0 0 0 351347 172 MiB 17 26 KiB 0 B 0 B replicapool 24 KiB 8 0 24 0 0 0 999 6.9 MiB 1270 167 MiB 0 B 0 B total_objects 32 total_used 271 MiB total_avail 160 GiB total_space 160 GiB Step 5: Working with Ceph Cluster Storage Modes You have three types of storage exposed by Rook: Shared Filesystem: Create a filesystem to be shared across multiple pods (RWX) Block: Create block storage to be consumed by a pod (RWO) Object: Create an object store that is accessible inside or outside the Kubernetes cluster All the necessary files for either storage mode are available in rook/cluster/examples/kubernetes/ceph/ directory. cd ~/ cd rook/deploy/examples 1. Cephfs Cephfs is used to enable shared filesystem which can be mounted with read/write permission from multiple pods.
Update the filesystem.yaml file by setting data pool name, replication size e.t.c. [root@k8s-bastion ceph]# vim filesystem.yaml apiVersion: ceph.rook.io/v1 kind: CephFilesystem metadata: name: k8sfs namespace: rook-ceph # namespace:cluster Once done with modifications let Rook operator create all the pools and other resources necessary to start the service: [root@k8s-bastion ceph]# kubectl create -f filesystem.yaml cephfilesystem.ceph.rook.io/k8sfs created Access Rook toolbox pod and check if metadata and data pools are created. [root@k8s-bastion ceph]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash [root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph fs ls name: k8sfs, metadata pool: k8sfs-metadata, data pools: [k8sfs-data0 ] [root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph osd lspools 1 device_health_metrics 3 replicapool 8 k8fs-metadata 9 k8fs-data0 [root@rook-ceph-tools-96c99fbf-qb9cj /]# exit Update the fsName and pool name in Cephfs Storageclass configuration file: $ vim csi/cephfs/storageclass.yaml parameters: clusterID: rook-ceph # namespace:cluster fsName: k8sfs pool: k8fs-data0 Create StorageClass using the command: [root@k8s-bastion csi]# kubectl create -f csi/cephfs/storageclass.yaml storageclass.storage.k8s.io/rook-cephfs created List available storage classes in your Kubernetes Cluster: [root@k8s-bastion csi]# kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rook-cephfs rook-ceph.cephfs.csi.ceph.com Delete Immediate true 97s Create test PVC and Pod to test usage of Persistent Volume. [root@k8s-bastion csi]# kubectl create -f csi/cephfs/pvc.yaml persistentvolumeclaim/cephfs-pvc created [root@k8s-bastion ceph]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc Bound pvc-fd024cc0-dcc3-4a1d-978b-a166a2f65cdb 1Gi RWO rook-cephfs 4m42s [root@k8s-bastion csi]# kubectl create -f csi/cephfs/pod.yaml pod/csicephfs-demo-pod created PVC creation manifest file contents: --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: rook-cephfs Checking PV creation logs as captured by the provisioner pod: [root@k8s-bastion csi]# kubectl logs deploy/csi-cephfsplugin-provisioner -f -c csi-provisioner [root@k8s-bastion ceph]# kubectl get pods | grep csi-cephfsplugin-provision csi-cephfsplugin-provisioner-b54db7d9b-5dpt6 6/6 Running 0 4m30s csi-cephfsplugin-provisioner-b54db7d9b-wrbxh 6/6 Running 0 4m30s If you made an update and provisioner didn’t pick you can always restart the Cephfs Provisioner Pods: # Gracefully $ kubectl delete pod -l app=csi-cephfsplugin-provisioner # Forcefully $ kubectl delete pod -l app=csi-cephfsplugin-provisioner --grace-period=0 --force 2. RBD Block storage allows a single pod to mount storage (RWO mode). 
Before Rook can provision storage, a StorageClass and CephBlockPool need to be created [root@k8s-bastion ~]# cd [root@k8s-bastion ~]# cd rook/deploy/examples [root@k8s-bastion csi]# kubectl create -f csi/rbd/storageclass.yaml cephblockpool.ceph.rook.io/replicapool created storageclass.storage.k8s.io/rook-ceph-block created [root@k8s-bastion csi]# kubectl create -f csi/rbd/pvc.yaml persistentvolumeclaim/rbd-pvc created List StorageClasses and PVCs: [root@k8s-bastion csi]# kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rook-ceph-block rook-ceph.rbd.csi.ceph.com Delete Immediate true 49s rook-cephfs rook-ceph.cephfs.csi.ceph.com Delete Immediate true 6h17m
[root@k8s-bastion csi]# kubectl get pvc rbd-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rbd-pvc Bound pvc-c093e6f7-bb4e-48df-84a7-5fa99fe81138 1Gi RWO rook-ceph-block 43s Deploying multiple apps We will create a sample application to consume the block storage provisioned by Rook with the classic wordpress and mysql apps. Both of these apps will make use of block volumes provisioned by Rook. [root@k8s-bastion ~]# cd [root@k8s-bastion ~]# cd rook/deploy/examples [root@k8s-bastion kubernetes]# kubectl create -f mysql.yaml service/wordpress-mysql created persistentvolumeclaim/mysql-pv-claim created deployment.apps/wordpress-mysql created [root@k8s-bastion kubernetes]# kubectl create -f wordpress.yaml service/wordpress created persistentvolumeclaim/wp-pv-claim created deployment.apps/wordpress created Both of these apps create a block volume and mount it to their respective pod. You can see the Kubernetes volume claims by running the following: [root@k8smaster01 kubernetes]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc Bound pvc-aa972f9d-ab53-45f6-84c1-35a192339d2e 1Gi RWO rook-cephfs 2m59s mysql-pv-claim Bound pvc-4f1e541a-1d7c-49b3-93ef-f50e74145057 20Gi RWO rook-ceph-block 10s rbd-pvc Bound pvc-68e680c1-762e-4435-bbfe-964a4057094a 1Gi RWO rook-ceph-block 47s wp-pv-claim Bound pvc-fe2239a5-26c0-4ebc-be50-79dc8e33dc6b 20Gi RWO rook-ceph-block 5s Check deployment of MySQL and WordPress Services: [root@k8s-bastion kubernetes]# kubectl get deploy wordpress wordpress-mysql NAME READY UP-TO-DATE AVAILABLE AGE wordpress 1/1 1 1 2m46s wordpress-mysql 1/1 1 1 3m8s [root@k8s-bastion kubernetes]# kubectl get svc wordpress wordpress-mysql NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE wordpress LoadBalancer 10.98.120.112 80:32046/TCP 3m39s wordpress-mysql ClusterIP None 3306/TCP 4m1s Retrieve WordPress NodePort and test URL using LB IP address and the port. NodePort=$(kubectl get service wordpress -o jsonpath='.spec.ports[0].nodePort') echo $NodePort Cleanup Storage test PVC and pods [root@k8s-bastion kubernetes]# kubectl delete -f mysql.yaml service "wordpress-mysql" deleted persistentvolumeclaim "mysql-pv-claim" deleted deployment.apps "wordpress-mysql" deleted [root@k8s-bastion kubernetes]# kubectl delete -f wordpress.yaml service "wordpress" deleted persistentvolumeclaim "wp-pv-claim" deleted deployment.apps "wordpress" deleted # Cephfs cleanup [root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/cephfs/pod.yaml [root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/cephfs/pvc.yaml # RBD Cleanup [root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/rbd/pod.yaml [root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/rbd/pvc.yaml Step 6: Accessing Ceph Dashboard The Ceph dashboard gives you an overview of the status of your Ceph cluster: The overall health The status of the mon quorum The sstatus of the mgr, and osds Status of other Ceph daemons View pools and PG status Logs for the daemons, and much more. List services in rook-ceph namespace: [root@k8s-bastion ceph]# kubectl get svc -n rook-ceph NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE csi-cephfsplugin-metrics ClusterIP 10.105.10.255 8080/TCP,8081/TCP 9m56s csi-rbdplugin-metrics ClusterIP 10.96.5.0 8080/TCP,8081/TCP 9m57s rook-ceph-mgr ClusterIP 10.103.171.189 9283/TCP 7m31s rook-ceph-mgr-dashboard ClusterIP 10.102.140.148 8443/TCP 7m31s
rook-ceph-mon-a ClusterIP 10.102.120.254 6789/TCP,3300/TCP 10m rook-ceph-mon-b ClusterIP 10.97.249.82 6789/TCP,3300/TCP 8m19s rook-ceph-mon-c ClusterIP 10.99.131.50 6789/TCP,3300/TCP 7m46s From the output we can confirm port 8443 was configured. Use port forwarding to access the dashboard: $ kubectl port-forward service/rook-ceph-mgr-dashboard 8443:8443 -n rook-ceph Forwarding from 127.0.0.1:8443 -> 8443 Forwarding from [::1]:8443 -> 8443 Now, should be accessible over https://locallhost:8443 Login username is admin and password can be extracted using the following command: kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="['data']['password']" | base64 --decode && echo Access Dashboard with Node Port To create a service with the NodePort, save this yaml as dashboard-external-https.yaml. # cd # vim dashboard-external-https.yaml apiVersion: v1 kind: Service metadata: name: rook-ceph-mgr-dashboard-external-https namespace: rook-ceph labels: app: rook-ceph-mgr rook_cluster: rook-ceph spec: ports: - name: dashboard port: 8443 protocol: TCP targetPort: 8443 selector: app: rook-ceph-mgr rook_cluster: rook-ceph sessionAffinity: None type: NodePort Create a service that listens on Node Port: [root@k8s-bastion ~]# kubectl create -f dashboard-external-https.yaml service/rook-ceph-mgr-dashboard-external-https created Check new service created: [root@k8s-bastion ~]# kubectl -n rook-ceph get service rook-ceph-mgr-dashboard-external-https NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rook-ceph-mgr-dashboard-external-https NodePort 10.103.91.41 8443:32573/TCP 2m43s In this example, port 32573 will be opened to expose port 8443 from the ceph-mgr pod. Now you can enter the URL in your browser such as https://[clusternodeip]:32573 and the dashboard will appear. Login with admin username and password decoded from rook-ceph-dashboard-password secret. kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="['data']['password']" | base64 --decode && echo Ceph dashboard view: Hosts list: Bonus: Tearing Down the Ceph Cluster If you want to tear down the cluster and bring up a new one, be aware of the following resources that will need to be cleaned up: rook-ceph namespace: The Rook operator and cluster created by operator.yaml and cluster.yaml (the cluster CRD) /var/lib/rook: Path on each host in the cluster where configuration is cached by the ceph mons and osds All CRDs in the cluster. [root@k8s-bastion ~]# kubectl get crds NAME CREATED AT apiservers.operator.tigera.io 2021-09-24T18:09:12Z bgpconfigurations.crd.projectcalico.org 2021-09-24T18:09:12Z bgppeers.crd.projectcalico.org 2021-09-24T18:09:12Z blockaffinities.crd.projectcalico.org 2021-09-24T18:09:12Z cephclusters.ceph.rook.io 2021-09-30T20:32:10Z clusterinformations.crd.projectcalico.org 2021-09-24T18:09:12Z felixconfigurations.crd.projectcalico.org 2021-09-24T18:09:12Z globalnetworkpolicies.crd.projectcalico.org 2021-09-24T18:09:12Z globalnetworksets.crd.projectcalico.org 2021-09-24T18:09:12Z hostendpoints.crd.projectcalico.org 2021-09-24T18:09:12Z imagesets.operator.tigera.io 2021-09-24T18:09:12Z installations.operator.tigera.io 2021-09-24T18:09:12Z ipamblocks.crd.projectcalico.org 2021-09-24T18:09:12Z ipamconfigs.crd.projectcalico.org 2021-09-24T18:09:12Z ipamhandles.crd.projectcalico.org 2021-09-24T18:09:12Z ippools.crd.projectcalico.org 2021-09-24T18:09:12Z
kubecontrollersconfigurations.crd.projectcalico.org 2021-09-24T18:09:12Z networkpolicies.crd.projectcalico.org 2021-09-24T18:09:12Z networksets.crd.projectcalico.org 2021-09-24T18:09:12Z tigerastatuses.operator.tigera.io 2021-09-24T18:09:12Z Edit the CephCluster and add the cleanupPolicy kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '"spec":"cleanupPolicy":"confirmation":"yes-really-destroy-data"' Delete block storage and file storage: cd ~/ cd rook/deploy/examples kubectl delete -n rook-ceph cephblockpool replicapool kubectl delete -f csi/rbd/storageclass.yaml kubectl delete -f filesystem.yaml kubectl delete -f csi/cephfs/storageclass.yaml Delete the CephCluster Custom Resource: [root@k8s-bastion ~]# kubectl -n rook-ceph delete cephcluster rook-ceph cephcluster.ceph.rook.io "rook-ceph" deleted Verify that the cluster CR has been deleted before continuing to the next step. kubectl -n rook-ceph get cephcluster Delete the Operator and related Resources kubectl delete -f operator.yaml kubectl delete -f common.yaml kubectl delete -f crds.yaml Zapping Devices # Set the raw disk / raw partition path DISK="/dev/vdb" # Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean) # Install: yum install gdisk -y Or apt install gdisk sgdisk --zap-all $DISK # Clean hdds with dd dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync # Clean disks such as ssd with blkdiscard instead of dd blkdiscard $DISK # These steps only have to be run once on each node # If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks. ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove % # ceph-volume setup can leave ceph- directories in /dev and /dev/mapper (unnecessary clutter) rm -rf /dev/ceph-* rm -rf /dev/mapper/ceph--* # Inform the OS of partition table changes partprobe $DISK Removing the Cluster CRD Finalizer: for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ print $1'); do kubectl get -n rook-ceph "$CRD" -o name | \ xargs -I kubectl patch -n rook-ceph --type merge -p '"metadata":"finalizers": [null]' done If the namespace is still stuck in Terminating state as seen in the command below: $ kubectl get ns rook-ceph NAME STATUS AGE rook-ceph Terminating 23h You can check which resources are holding up the deletion and remove the finalizers and delete those resources. kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph From my output the resource is configmap named rook-ceph-mon-endpoints: NAME DATA AGE configmap/rook-ceph-mon-endpoints 4 23h Delete the resource manually: # kubectl delete configmap/rook-ceph-mon-endpoints -n rook-ceph configmap "rook-ceph-mon-endpoints" deleted Recommended reading: Rook Best Practices for Running Ceph on Kubernetes
homedevises · 6 years ago
Text
Lamont Doherty Earth Observatory of Columbia University, where I work, is probably typical of many universities and research labs. Despite being at an Ivy League school, budgets are very tight (at least compared with the tech / startup world). Money comes in in chunks when our proposals to agencies such as the National Science Foundation are funded. We use this money to buy workstations and servers, which live beneath our desks and in our closets. We pack these servers full of hard drives in order to store as much data as possible for as little money as possible. We then download files from datasets such as CMIP5 and use the workstations to churn through the data.
At Lamont, we make some attempts to link these servers and workstations into a network. This is mostly done by cross-mounting the filesystems from one machine to the others using NFS. This allows datasets to be shared among different users. In some cases, the Ingrid software is used to serve netCDF-style datasets over the OPeNDAP protocol, providing another way for data to be exchanged among the computers in the network.
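To make the current setup concrete, here is a minimal sketch of that kind of NFS cross-mount; the hostname data-server, the subnet, and the paths are hypothetical placeholders, not our actual configuration.

```bash
# On the storage server: export the dataset directory read-only to the local subnet
# (assumes the NFS server packages are already installed)
echo "/storage/cmip5 192.168.1.0/24(ro,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On each workstation: mount the shared dataset directory
sudo mkdir -p /data/cmip5
sudo mount -t nfs data-server:/storage/cmip5 /data/cmip5
```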
This infrastructure has served us well, providing the foundation for many important scientific discoveries and hundreds of published papers. But it is starting to show its age, and as CMIP6 starts to come online, we have to ask whether this model is the best solution going forward, because it has some clear downsides.
In order to achieve more flexibility, it would be nice to be able to separate the storage of datasets from the computing / analysis. Most importantly, it would be great to be able to add either computing or storage whenever needs arise or money appears. The solution to both problems is to borrow approaches from cloud computing and apply them to locally owned and maintained hardware: an "on-premises cloud". We address compute and storage needs separately.
Jupyter is a technology that allows people to do interactive data analysis in a web browser using dozens of different programming languages. (Jupyter stands for Julia, Python, and R, the three most widely used open-source languages for data science.) More and more scientists seem to want to do their data analysis inside a Jupyter environment (Notebook or Lab), and an article in Nature provides a great overview of some of the reasons why.
This is great news in terms of IT, because the Jupyter project has already developed a ton of great infrastructure (like JupyterHub) for serving up notebooks to users. If most people in the organization want to use Jupyter for their computing, a great solution is to set up a shared server or cluster running JupyterHub. For a small group (< 10 people), this could be as simple as one big server for everyone.
Larger organizations will need multiple servers, and these will need to be coordinated somehow (i.e. a cluster). If people are working with high-resolution datasets or doing machine learning, they may also want lots of CPU power, RAM, and GPUs, and might want to launch Spark or Dask clusters. This will take more resources. However, if you’re anything like us, this computing will be very “bursty” — happening in quick bursts of intensity — since it is part of an interactive process in which the scientist makes a calculation, thinks about it, and then possibly tries something else, over and over again all day.
This scenario is ideal for the commercial cloud, because you can ephemerally provision any number of compute nodes for short tasks and pay by the minute. This is possible because of economies of scale…at any given instant, there are thousands of nodes spinning up and down in Amazon’s and Google’s datacenters. In building our own data-analysis cluster, we want to try to replicate some of these economies of scale by aggregating over as many users as possible within the organization. However, we don’t want to become as obsessed as HPC centers are about cluster utilization. For a data-analysis cluster, no one wants to wait in a queue: there always has to be at least one notebook available to every scientist in the organization. This means some machines will sit idle at times. However, the hope is that, in aggregate, the shared compute resource will still be more efficient — for the environment, for the budget, and for the research process — than the status quo of personal workstations.
So how exactly do we create this shared computing environment? It would be great to be able to use Kubernetes, a software tool for managing clusters, in our data center. This is the technology that makes JupyterHub run smoothly in the cloud, via the "Zero to JupyterHub" project.
Using Kubernetes will save a ton of time (and therefore money) for the people who have to set up and run this thing, and it will allow IT staff unfamiliar with cloud tech to start learning important new skills. Kubernetes will probably run on top of some sort of OpenStack cluster. OpenStack is an open-source technology for setting up these types of systems and is widely used in science, at places like CERN and the Square Kilometer Array.
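As a rough illustration of how little glue is needed once Kubernetes is available, here is a hedged sketch of installing JupyterHub with the Zero to JupyterHub Helm chart; the release name jhub, the namespace, and the config.yaml file are placeholders for whatever an organization would actually choose.

```bash
# Register the JupyterHub Helm chart repository
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update

# Install (or upgrade) JupyterHub into its own namespace; per-user CPU, RAM,
# and image settings live in a local config.yaml
helm upgrade --cleanup-on-fail --install jhub jupyterhub/jupyterhub \
  --namespace jhub --create-namespace \
  --values config.yaml
```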
All this will run on top of totally generic commodity server hardware, which Dell, Red Hat, or any number of smaller vendors would be happy to sell to us.
Depending on the age mix of the scientists, there may be lots of people who don’t want to be forced to use the JupyterLab interface (despite its flexibility and support for about 100 programming languages). More traditional shared Linux servers, running old-school applications like Matlab and IDL, can be added to the cluster to accommodate these people.
Other legacy science applications, like databases or web servers, could also easily be deployed within a Kubernetes cluster and managed with the same set of tools. If the organization needs big-data processing tools like Hadoop or Spark, those could be added too. (In my experience, these are not widespread in academia outside of HPC environments.)
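For example, purely as a sketch with a placeholder namespace, image tag, and password, a small database backing a legacy application could be stood up with nothing more than standard kubectl commands:

```bash
# Run a small PostgreSQL instance for a legacy application and expose it inside the cluster
kubectl create namespace legacy-apps
kubectl -n legacy-apps create secret generic pg-credentials \
  --from-literal=POSTGRES_PASSWORD=changeme
kubectl -n legacy-apps create deployment postgres --image=postgres:15
kubectl -n legacy-apps set env deployment/postgres --from=secret/pg-credentials
kubectl -n legacy-apps expose deployment postgres --port=5432
```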
Ceph is an open-source technology which allows many individual storage servers to work together to provide a single “virtual” storage device for all the computers in the network. The primary advantage of a technology like Ceph is that it decouples the underlying hardware (servers, hard drives, etc.) from the endpoints that users and applications interact with. Servers can be replaced or added to a Ceph cluster indefinitely while maintaining the integrity of the data. By using sophisticated algorithms to coordinate the different nodes in the cluster, Ceph can deliver scalable, high performance in terms of both bandwidth and latency. Data is replicated across multiple servers, and the cluster is resilient to drive failure.
Ceph provides different types of storage. The most useful for climate-type data analysis are a shared POSIX-style filesystem (CephFS) and object storage.
A great thing about Ceph is that you don’t have to choose between these two. You can support what people are familiar with (files) while managing the transition to newer technologies based on object storage.
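On the file side, consumption from Kubernetes is just a PersistentVolumeClaim against a CephFS-backed StorageClass. The following is a minimal sketch that assumes the "rook-cephfs" class name used in the Rook example manifests; the actual class name and size are up to the administrator.

```bash
# Request a shared, multi-writer scratch volume backed by CephFS
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-scratch
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 500Gi
EOF
```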
Ceph could run on dedicated storage nodes (basically just servers with lots of hard drives) within the same overall Kubernetes / OpenStack cluster, providing the shared filesystems and object storage for the compute nodes to access. Tools like Rook exist to help deploy Ceph on Kubernetes clusters; this is apparently how CERN does it, and a rough sketch of a Rook-based install follows below.
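This is only a sketch of what a Rook-based deployment might look like, using the example manifests that ship with the Rook repository; in a real install, cluster.yaml would first need to be edited to match the actual storage nodes and raw devices.

```bash
# Fetch the Rook repository and use its example manifests
git clone --single-branch https://github.com/rook/rook.git
cd rook/deploy/examples

# Install the Rook operator and its CRDs
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

# Create the Ceph cluster itself (edit cluster.yaml first to point at the
# intended storage nodes and devices)
kubectl create -f cluster.yaml

# Watch the Ceph pods come up in the rook-ceph namespace
kubectl -n rook-ceph get pods -w
```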
The same vendors who could sell us an OpenStack cluster would presumably be glad to add Ceph to the package.
This basic foundation (Kubernetes + Ceph + OpenStack) can provide the infrastructure to support a wide range of different types of scientific research organization. For a place like Lamont, we would want to set up specific applications like Ingrid. We would also have to think carefully about which shared datasets to store and how to store them. Ceph, with its object-storage capability, would be a perfect place to use Zarr to store netCDF-style climate datasets in a highly efficient and scalable way, as we are already doing today with Pangeo on the commercial cloud.
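To give a flavor of the object-storage side, here is a hedged sketch of talking to a Ceph object store through its S3-compatible gateway with the AWS CLI; the endpoint URL, credentials, bucket name, and dataset path are all hypothetical, and in practice a library such as s3fs would let Zarr and xarray write to such a bucket directly.

```bash
# Credentials and endpoint for the Ceph RADOS Gateway (placeholders)
export AWS_ACCESS_KEY_ID="CHANGEME"
export AWS_SECRET_ACCESS_KEY="CHANGEME"
RGW_ENDPOINT="http://rook-ceph-rgw-my-store.rook-ceph.svc"

# Create a bucket for Zarr data and confirm it exists
aws --endpoint-url "$RGW_ENDPOINT" s3 mb s3://cmip6-zarr
aws --endpoint-url "$RGW_ENDPOINT" s3 ls

# A Zarr store is just a hierarchy of small objects, so copying one into the
# bucket is an ordinary recursive upload
aws --endpoint-url "$RGW_ENDPOINT" s3 cp ./my-dataset.zarr s3://cmip6-zarr/my-dataset.zarr --recursive
```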
I don’t know exactly how much all this would cost. To get a price, one would have to issue a request for proposals, and this document could form the basis for such an RFP. To keep costs low, it is important to emphasize that no special hardware or expensive proprietary software is required to make this work: commodity servers and open-source software (OpenStack, Kubernetes, Ceph, etc.) are all that is needed. Beyond that, one just has to specify the storage and compute capacity and throw some rough numbers out there.
If any hardware vendors out there are reading this, we would be happy if you could give us a rough cost estimate.
from WordPress https://homedevise.com/why-you-should-not-go-to-google-program-for-architecture-google-program-for-architecture/
kurano · 6 years ago
Quote
Ceph is an important building block for vendors building platforms based on OpenStack and containers. In fact, two-thirds of OpenStack users run Ceph as their main storage, and Ceph is also a core part of Rook. Rook, a project under the Cloud Native Computing Foundation (CNCF), exists to make it easier to build storage services for Kubernetes-based applications. Since Ceph now serves such a diverse set of worlds, it makes sense for it to have a foundation as a neutral governing body. Still, my hunch is that the OpenStack Foundation would also have liked to host the project.
Distributed storage Ceph establishes its own open-source foundation and joins the Linux Foundation | TechCrunch Japan
un-enfant-immature · 6 years ago
Text
The Ceph storage project gets a dedicated open-source foundation
Ceph is an open source technology for distributed storage that gets very little public attention but that provides the underlying storage services for many of the world’s largest container and OpenStack deployments. It’s used by financial institutions like Bloomberg and Fidelity, cloud service providers like Rackspace and Linode, telcos like Deutsche Telekom, car manufacturers like BMW and software firms like SAP and Salesforce.
These days, you can’t have a successful open source project without setting up a foundation that manages the many diverging interests of the community and so it’s maybe no surprise that Ceph is now getting its own foundation. Like so many other projects, the Ceph Foundation will be hosted by the Linux Foundation.
“While early public cloud providers popularized self-service storage infrastructure, Ceph brings the same set of capabilities to service providers, enterprises, and individuals alike, with the power of a robust development and user community to drive future innovation in the storage space,” writes Sage Weil, Ceph co-creator, project leader, and chief architect at Red Hat for Ceph. “Today’s launch of the Ceph Foundation is a testament to the strength of a diverse open source community coming together to address the explosive growth in data storage and services.”
Given its broad adoption, it's also no surprise that there's a wide-ranging list of founding members. These include Amihan Global, Canonical, CERN, China Mobile, Digital Ocean, Intel, ProphetStor Data Services, OVH Hosting, Red Hat, SoftIron, SUSE, Western Digital, XSKY Data Technology and ZTE. It's worth noting that many of these founding members were already part of the slightly less formal Ceph Community Advisory Board.
“Ceph has a long track record of success when it comes to helping organizations with effectively managing high growth and expanding data storage demands,” said Jim Zemlin, the executive director of the Linux Foundation. “Under the Linux Foundation, the Ceph Foundation will be able to harness investments from a much broader group to help support the infrastructure needed to continue the success and stability of the Ceph ecosystem.”
Ceph is an important building block for vendors who build both OpenStack- and container-based platforms. Indeed, two-thirds of OpenStack users rely on Ceph, and it's a core part of Rook, a Cloud Native Computing Foundation project that makes it easier to build storage services for Kubernetes-based applications. As such, Ceph straddles many different worlds, and it makes sense for the project to get its own neutral foundation now, though I can't help but think that the OpenStack Foundation would've also liked to host the project.
Today’s announcement comes only days after the Linux Foundation also announced that it is now hosting the GraphQL Foundation.