#kubernetes secrets tutorial
Text
Kubernetes Secrets Tutorial for DevOps Beginners and Students
Full Video Link https://youtube.com/shorts/VXQSE4ftbtc Hi, a new #video on #kubernetes #secrets is published on #codeonedigest #youtube channel. Learn #kubernetessecrets #node #docker #container #cloud #aws #azure #programming #coding
In Kubernetes, a Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Using a Secret means that you don't need to include confidential data in your application code. Because Secrets are created independently of the Pods that use them, there is less risk of the Secret being exposed during the workflow of creating, viewing, and editing…
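For orientation, a minimal Secret manifest might look like the sketch below; the object name and the credential values are invented placeholders rather than anything from the video:

apiVersion: v1
kind: Secret
metadata:
  name: demo-db-credentials      # hypothetical name, for illustration only
type: Opaque
data:
  username: YWRtaW4=             # base64 of "admin"
  password: czNjcjN0UGFzcw==     # base64 of "s3cr3tPass"

Once applied with kubectl apply -f secret.yaml, the values can be referenced from a Pod as environment variables or mounted as files, keeping them out of the application code.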
View On WordPress
#kubernetes#kubernetes explained#kubernetes installation#kubernetes interview questions#kubernetes operator#kubernetes orchestration tutorial#kubernetes overview#kubernetes secrets#kubernetes secrets and configmaps#kubernetes secrets as environment variables#kubernetes secrets best practices#kubernetes secrets encryption#kubernetes secrets management#kubernetes secrets spring boot#kubernetes secrets tutorial#kubernetes secrets vault#kubernetes tutorial#kubernetes tutorial for beginners
0 notes
Text
Mastering Azure Kubernetes Secrets Management and Ingress Control
Introduction This tutorial provides a comprehensive, hands-on guide to mastering secrets management and Ingress controllers in Azure Kubernetes Service (AKS). We will cover the fundamental concepts, implementation guides, code examples, best practices, testing, and debugging. By the end of this tutorial, you will have a deep understanding of how to securely manage secrets and configure Ingress…
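As a rough taste of how the two topics meet, the hedged sketch below shows an Ingress that terminates TLS using a certificate stored in a Secret; the host, secret and service names are assumptions made for illustration, not values from the tutorial:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                 # hypothetical name
spec:
  tls:
  - hosts:
    - demo.example.com
    secretName: demo-tls-cert        # a kubernetes.io/tls Secret created beforehand
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-web           # hypothetical backend Service
            port:
              number: 80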
0 notes
Text
Unlocking the Secrets of Learning DevOps Tools
In the ever-evolving landscape of IT and software development, DevOps has emerged as a crucial methodology for improving collaboration, efficiency, and productivity. Learning DevOps tools is a key step towards mastering this approach, but it can sometimes feel like unraveling a complex puzzle. In this blog, we will explore the secrets to mastering DevOps tools and navigating the path to becoming a proficient DevOps practitioner.
Learning DevOps tools can seem overwhelming at first, but with the right approach, it can be an exciting and rewarding journey. Here are some key steps to help you learn DevOps tools easily: DevOps training in Hyderabad, where traditional boundaries fade and a unified approach to development and operations emerges.
1. Understand the DevOps culture: DevOps is not just about tools, but also about adopting a collaborative and iterative mindset. Start by understanding the principles and goals of DevOps, such as continuous integration, continuous delivery, and automation. Embrace the idea of breaking down silos and promoting cross-functional teams.
2. Begin with foundational knowledge: Before diving into specific tools, it's important to have a solid understanding of the underlying technologies. Get familiar with concepts like version control systems (e.g., Git), Linux command line, network protocols, and basic programming languages like Python or Shell scripting. This groundwork will help you better grasp the DevOps tools and their applications.
3. Choose the right tools: DevOps encompasses a wide range of tools, each serving a specific purpose. Start by identifying the tools most relevant to your requirements. Some popular ones include Jenkins, Ansible, Docker, Kubernetes, and AWS CloudFormation. Don't get overwhelmed by the number of tools; focus on learning a few key ones initially and gradually expand your skill set.
4. Hands-on practice: Theory alone won't make you proficient in DevOps tools. Set up a lab environment, either locally or through cloud services, where you can experiment and work with the tools. Build sample projects, automate deployments, and explore different functionalities. The more hands-on experience you gain, the more comfortable you'll become with the tools.
Elevate your career prospects with our DevOps online course, because learning isn't confined to classrooms; it happens where you are.
5. Follow official documentation and online resources: DevOps tools often have well-documented official resources, including tutorials, guides, and examples. Make it a habit to consult these resources as they provide detailed information on installation procedures, configuration setup, and best practices. Additionally, join online communities and forums where you can ask questions, share ideas, and learn from experienced practitioners.
6. Collaborate and work with others: DevOps thrives on collaboration and teamwork. Engage with fellow DevOps enthusiasts, attend conferences, join local meetups, and participate in online discussions. By interacting with others, you'll gain valuable insights, learn new techniques, and expand your network. Collaborative projects or open-source contributions will also provide a platform to practice your skills and learn from others.
7. Stay updated: The DevOps landscape evolves rapidly, with new tools and practices emerging frequently. Keep yourself updated with the latest trends, technological advancements, and industry best practices. Follow influential blogs, read relevant articles, subscribe to newsletters, and listen to podcasts. Being aware of the latest developments will enhance your understanding of DevOps and help you adapt to changing requirements.
Mastering DevOps tools is a continuous journey that requires dedication, hands-on experience, and a commitment to continuous learning. By understanding the DevOps landscape, identifying core tools, and embracing a collaborative mindset, you can unlock the secrets to becoming a proficient DevOps practitioner. Remember, the key is not just to learn the tools but to leverage them effectively in creating streamlined, automated, and secure development workflows.
0 notes
Text
In this tutorial, I'll take you through the steps to install minikube on an Ubuntu 22.04|20.04|18.04 Linux system. For those new to minikube, let's start with an introduction before diving into the installation steps.

Minikube is an open source tool that was developed to enable developers and system administrators to run a single cluster of Kubernetes on their local machine. Minikube starts a single node kubernetes cluster locally with small resource utilization. This is ideal for development tests and POC purposes. For CentOS, check out: Installing Minikube on CentOS 7/8 with KVM.

In a nutshell, Minikube packages and configures a Linux VM, then installs Docker and all Kubernetes components into it. Minikube supports Kubernetes features such as:

DNS
NodePorts
ConfigMaps and Secrets
Dashboards
Container Runtime: Docker, CRI-O, and containerd
Enabling CNI (Container Network Interface)
Ingress
PersistentVolumes of type hostPath

Hypervisor choice for Minikube: Minikube supports both VirtualBox and KVM hypervisors. This guide will cover both hypervisors.

Step 1: Update system

Run the following commands to update all system packages to the latest release:

sudo apt update
sudo apt install apt-transport-https
sudo apt upgrade

If a reboot is required after the upgrade then perform the process.

[ -f /var/run/reboot-required ] && sudo reboot -f

Step 2: Install KVM or VirtualBox Hypervisor

For VirtualBox users, install VirtualBox using:

sudo apt install virtualbox virtualbox-ext-pack

KVM Hypervisor Users

For those interested in using the KVM hypervisor, check our guide on how to Install KVM on CentOS / Ubuntu / Debian, then follow How to run Minikube on KVM instead.

Step 3: Download minikube on Ubuntu 22.04|20.04|18.04

You need to download the minikube binary. I will put the binary under the /usr/local/bin directory since it is inside $PATH.

wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube-linux-amd64
sudo mv minikube-linux-amd64 /usr/local/bin/minikube

Confirm the version installed:

$ minikube version
minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

Step 4: Install kubectl on Ubuntu

We need kubectl, a command line tool used to deploy and manage applications on Kubernetes:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Make the kubectl binary executable.

chmod +x ./kubectl

Move the binary into your PATH:

sudo mv ./kubectl /usr/local/bin/kubectl

Check version:

$ kubectl version -o json --client
{
  "clientVersion": {
    "major": "1",
    "minor": "24",
    "gitVersion": "v1.24.1",
    "gitCommit": "3ddd0f45aa91e2f30c70734b175631bec5b5825a",
    "gitTreeState": "clean",
    "buildDate": "2022-05-24T12:26:19Z",
    "goVersion": "go1.18.2",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.4"
}

Step 5: Starting minikube on Ubuntu 22.04|20.04|18.04

Now that the components are installed, you can start minikube. The VM image will be downloaded and configured for a Kubernetes single node cluster.

$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Downloading Minikube ISO
 150.53 MB / 150.53 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Finished Downloading kubelet v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Wait for the download and setup to finish, then confirm that everything is working fine.

Step 6: Minikube Basic operations

To check cluster status, run:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.39.117:8443
KubeDNS is running at https://192.168.39.117:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Note that the Minikube configuration file is located under ~/.minikube/machines/minikube/config.json

To View Config, use:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jmutai/.minikube/ca.crt
    server: https://192.168.39.117:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/jmutai/.minikube/client.crt
    client-key: /home/jmutai/.minikube/client.key

To check running nodes:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   13m   v1.10.0

Access the minikube VM using ssh:

$ minikube ssh
(minikube welcome banner is printed)
$ sudo su -

To stop a running local kubernetes cluster, run:

$ minikube stop

To delete a local kubernetes cluster, use:

$ minikube delete

Step 7: Enable Kubernetes Dashboard

Kubernetes ships with a web dashboard which allows you to manage your cluster without interacting with a command line. The dashboard addon is installed and enabled by default on minikube.

$ minikube addons list
- addon-manager: enabled
- coredns: disabled
- dashboard: enabled
- default-storageclass: enabled
- efk: disabled
- freshpod: disabled
- heapster: disabled
- ingress: disabled
- kube-dns: enabled
- metrics-server: disabled
- registry: disabled
- registry-creds: disabled
- storage-provisioner: enabled

To open it directly in your default browser, use:

$ minikube dashboard

To get the URL of the dashboard:

$ minikube dashboard --url
http://192.168.39.117:30000

Access the Kubernetes Dashboard by opening the URL in your favorite browser.

For further reading, check:
Hello Minikube Series: https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/
Minikube guides for newbies: https://kubernetes.io/docs/getting-started-guides/minikube/
0 notes
Photo
New Post has been published on https://dev-ops-notes.ru/blog/2017/11/29/how-to-integrate-zendesk-mobile-sdk-with-firebase-using-aws-lambda-or-google-cloud-functions/?utm_source=TR&utm_medium=andrey-v-maksimov&utm_campaign=SNAP%2Bfrom%2BDev-Ops-Notes.RU
How to integrate Zendesk Mobile SDK with Firebase using AWS Lambda or Google Cloud Functions?
Everybody knows that you may authenticate your users for the Zendesk Mobile SDK using JWT (JSON Web Token). Moreover, there are a lot of HOWTOs showing JWT implementations for many different programming languages. In this tutorial I'll show you how to use Google Cloud Functions and Node.js with some additional npm packages to create a fully scalable and absolutely free serverless JWT authentication backend for the Zendesk Mobile SDK.
Why Google?
Of course you may use AWS Lambda functions to implement a similar solution, but in my opinion using a single product (Google Firebase) for iOS backend operations is much easier than using a couple of services from AWS. So, the main reason was Firebase.
At the same time Google gives you a great logging solution for all its services, so you don't need to implement something special and reinvent the wheel. Just use a single solution for all your services.
And the third reason is the API. In my opinion Google's API is the best I've ever seen. Only Google provides you with a detailed explanation of most errors, along with direct URL links to its console to, for example, enable a required service.
What is Serverless, Cloud functions and Lambda?
Think of it like a lightweight PaaS hosting based on container technologies, with some limitations which make this technology super fast and scalable. This hosting stores your pieces of code which are ready to be launched independently to solve one simple problem (call another function or web service, save something to the database, or send an email, for example) that can be solved in a short period of time.
Your piece of code is launched inside a container each time another cloud service triggers it or calls it directly via the HTTP/HTTPS protocol like a traditional web service.
Why Serverless (AWS Lambda or CloudFunctions)?
We still do not use the resources we need for each kind of solution sparingly. We still use half-loaded VMs to cover long infrastructure scale-up times or to keep the ability to launch additional containers in a Kubernetes cluster. In the cloud we're paying for such unused resources. I don't know about you, but I do not want to do this.
Using cloud functions allows us to use the available resources, let's say, more frugally, and at the same time it gives us the ability to scale faster than with VMs or even containers. So, with Cloud Functions we can use the nature of the cloud without thinking about our web service's scalability.
Of course, all cloud providers support serverless technologies, so you don't need to worry about something like vendor lock-in. You may easily switch your cloud provider at any time.
Serverless backend
First of all, I'm assuming that you already have:
A Google Firebase account (traditional Google Cloud is also OK if you're not using Firebase) with a Project created inside.
You've installed the Firebase SDK for Cloud Functions and created the initial project structure for your cloud functions.
You've read about Writing HTTP cloud functions.
After that you'll easily be able to write something like this in Node.js. Put the following code into your index.js file to create a cloud function called jwt_auth:
'use strict';

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

var jwt = require('jwt-simple');
var uuid = require('uuid');
var url = require('url');

var subdomain = 'dev-ops-notes';   // Your Zendesk sub-domain
var shared_key = '.....';          // Zendesk provided shared key

exports.jwt_auth = functions.https.onRequest((req, res) => {
  // Uncomment the following lines if you want to debug the incoming request
  //console.log('Request method', req.method);
  //console.log('Request: ', req);
  //console.log('Body: ', req.body);
  //console.log('Query: ', req.query);

  if (!req.body.user_token) {
    console.error('No jwt token provided in URL');
    res.status(401).send('Unauthorized');
    return;
  }

  const jwt_token = req.body.user_token;
  console.log("Verifying token...");

  admin.auth().verifyIdToken(jwt_token).then(decodedIdToken => {
    console.log('ID Token correctly decoded', decodedIdToken);
    let user = decodedIdToken;

    var displayName = user.email;
    if (user.displayName != null) {
      displayName = user.displayName;
    }

    var payload = {
      iat: (new Date().getTime() / 1000),
      jti: uuid.v4(),
      name: displayName,
      email: user.email
    };

    // encode
    var token = jwt.encode(payload, shared_key);
    console.log('Token', token);

    var redirect = 'https://' + subdomain + '.zendesk.com/access/jwt?jwt=' + token;
    var query = url.parse(req.url, true).query;
    if (query['return_to']) {
      redirect += '&return_to=' + encodeURIComponent(query['return_to']);
    }
    console.log('Redirect response', redirect);

    let response = { "jwt": token };
    res.status(200).send(response);
    return;
  }).catch(error => {
    console.error('Error while verifying Firebase ID token:', error);
    res.status(401).send('Unauthorized');
    return;
  });
});
In the code above we're importing some additional dependencies:
firebase-functions - gives us access to the HTTP Request (req) and Response (res) objects and their properties.
firebase-admin - gives us access to Firebase Authentication features (like checking user tokens or credentials).
jwt-simple - a small lib allowing us to form a correct JWT response.
uuid - a lib for generating a random UUID for the Zendesk JWT token.
url - a lib for parsing the HTTP Request query string and processing the redirect_url parameter provided by Zendesk, which remembers what page the user came from so we can include it in our request and pass it back later.
We check for the existence of the user_token parameter inside the HTTP Request and respond with 401 Unauthorized if we do not find that parameter.
After that we verify the Firebase user token inside our request using the verifyIdToken method, which returns the Firebase user information on success.
After that we form the JWT response structure (see Anatomy of a JWT request for more details), add the return_to information from the Zendesk request, and send back a 200 OK HTTP Response with a body containing our JWT token.
Now it's time to go to the functions directory and install all the required dependencies:
$ npm install firebase-functions
$ npm install firebase-admin
$ npm install jwt-simple
$ npm install uuid
$ npm install url
Now you're ready to deploy your cloud function using the command:
firebase deploy --only functions
In the command output you'll see the function URL, which we'll need to provide to the Zendesk Mobile SDK configuration in the next step (something like us-central1-<your-firebase-project-id>.cloudfunctions.net).
Zendesk configuration
First of all you need to Enable Mobile SDK on your account admin page:
Then we need to go to the Mobile SDK configuration in settings and click the "Add App" button.
At the Mobile App Settings do the following:
Fill in the Name of your application on the Setup tab and enable the JWT Authentication method.
Fill in the JWT URL with the URL you got during cloud function deployment.
Put the JWT Secret into the shared_key variable and deploy the function once more to update it, using the same command you've already used.
Enable Zendesk Guide and Conversations support if needed on the Support SDK tab.
Now you're able to use the Zendesk Mobile SDK in your iOS application.
Using Zendesk Mobile SDK with JWT Authentication
I'll not duplicate this great Zendesk tutorial; just watch the video and follow the next steps to embed Zendesk Support in your mobile app.
I will add just a few things here.
If you want to embed Zendesk Support as a UITabBarItem, follow this tutorial: Quick start - Support SDK for iOS
If you want to use Zendesk Support as a usual UIViewController, just use this code to launch it:
URLProtocol.registerClass(ZDKAuthenticationURLProtocol.self)
let jwtUserIdentity = ZDKJwtIdentity(jwtUserIdentifier: idToken)
ZDKConfig.instance().userIdentity = jwtUserIdentity
let helpCenterContentModel = ZDKHelpCenterOverviewContentModel.defaultContent()
ZDKHelpCenter.presentOverview(self, with: helpCenterContentModel)
Let's come back to JWT Authentication in the iOS App.
The full JWT authentication flow is shown here: Building a dedicated JWT endpoint for the Support SDK. This article is very important, because it shows how to debug the authentication process using curl if something goes wrong.
IMPORTANT: The most common mistake is a misconfigured JWT token that does not contain these 4 MUST HAVE fields:
iat
jti
name
email
Next, you need to provide the current user information to Zendesk before launching the Zendesk Support UIViewController. If you're using Firebase as the authentication backend for your users in the app, just use the following code, for example inside a "Get Support" UIButton action:
if let currentUser = Auth.auth().currentUser {
    currentUser.getTokenForcingRefresh(true, completion: { (idToken, error) in
        if let error = error {
            debugPrint("Error obtaining user token: %@", error)
        } else {
            URLProtocol.registerClass(ZDKAuthenticationURLProtocol.self)
            let jwtUserIdentity = ZDKJwtIdentity(jwtUserIdentifier: idToken)
            ZDKConfig.instance().userIdentity = jwtUserIdentity
            // Create a Content Model to pass in
            let helpCenterContentModel = ZDKHelpCenterOverviewContentModel.defaultContent()
            ZDKHelpCenter.presentOverview(self, with: helpCenterContentModel)
        }
    })
}
Here we're getting the current user token (idToken) from Firebase, configuring a ZDKJwtIdentity object and providing it to the Zendesk Support View (helpCenterContentModel) before launching it.
That's it. Now you're ready to provide professional support for your users using the most exciting Support platform ever!
8 notes
Text
Understand how K8s Components work together | Complete Application Deployment using Kubernetes Components
It's a hands-on, practical tutorial of deploying an application using different Kubernetes Components to REALLY understand how these components fit in together and how you can use them in your application setup.
So, instead of creating each component separately and without context, this video goes through a complete application setup using a pod, deployment, service, configmap and secret. Referencing diagrams, which show the browser request flow through these components, will further help you understand the whole flow.
In detail we create the following components; a rough sketch of how the Secret and ConfigMap plug into the Mongo Express Deployment follows the list.
1) MongoDB Deployment
Creating the database container/pod, in which the mongodb runs.
2) Secret
Creating the Secret component, where the username and password are stored.
3) Internal Service
Creating the Service component for MongoDB to be accessible by other Kubernetes components.
4) Mongo Express Deployment
Creating the Mongo Express container/pod, in which the web application runs.
5) ConfigMap
Creating the ConfigMap component, where the MongoDB URL is stored.
6) External Service
Creating the external service component for Mongo Express to be accessible from outside the kubernetes cluster (from the browser)
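For orientation only, here is a hedged sketch of how the Mongo Express Deployment might consume the Secret and ConfigMap as environment variables; the resource names (mongodb-secret, mongodb-configmap) and keys are assumptions made for illustration, while the ME_CONFIG_* variable names come from the mongo-express image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret         # Secret from step 2 (name assumed)
              key: mongo-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap      # ConfigMap holding the MongoDB URL (name assumed)
              key: database_url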
This video was actually inspired by a viewer's feedback. So I hope it helps in getting a bigger picture!
submitted by /u/Techworld_with_Nana
0 notes
Photo
D3.js 5.0, and an introduction to functional programming in JS
#378 - March 23, 2018
Read on the Web
JavaScript Weekly
D3.js 5.0 Released - D3 continues to be a fantastic choice for data visualization with JavaScript. Changes in 5.0 include using promises to load data instead of callbacks, contour plots, and density contours.
Mike Bostock
Lazy Loading Modules with ConditionerJS - Linking JavaScript functionality to DOM elements can become a tedious task. See how ConditionerJS makes progressive enhancement easier in this thorough tutorial.
Smashing Magazine
The Best JavaScript Debugging Tools for 2018 - If you work with JavaScript, you'll know that it doesn't always play nice. Here we look at the best JavaScript debugging tools you can use to clean up your code and provide great software experiences to your users.
RAYGUN sponsor
▶ A 10 Video Introduction to Functional JavaScript with Ramda - Want to get started with functional programming in JavaScript? Ramda is a more functional alternative to libraries like Lodash, and these brief videos cover the essentials. You may also appreciate Kyle Simpson's Functional-Light JavaScript if you set off on the functional programming journey.
James Moore
JavaScript vs. TypeScript vs. ReasonML: Pros and Cons - Dr. Axel is becoming a fan of static typing for larger projects and explains the pros and cons of it and how static typing relates to the TypeScript and ReasonML projects.
Dr. Axel Rauschmayer
A Proposal for Package Name Maps for ES Modules - Or how to solve the web's "bare import specifier" problem.
Domenic Denicola
A TC39 Proposal for Object.fromEntries - It would transform a list of key/value pairs into an object.
TC39 news
How Unsplash Gradually Migrated to TypeScript
Oliver Joseph Ash
Jobs
Engineering Manager - You'll lead a team in building a product at scale and get the opportunity to manage and mentor while helping shape decisions.
Skillshare
Software Engineer at Fat Lama (London) - Technology and engineering is at the heart of what we do at Fat Lama - help us build the rental marketplace for everything.
Fat Lama
JavaScript Expert? Sign Up for Vettery - Create your profile and we'll connect you with top companies looking for talented front-end developers.
Vettery
Place your own job listing in a future issue
Tutorials & Tips
Getting Started with the Web MIDI API - Covers the basics of MIDI and the Web MIDI API showing how simple it is to create frontend apps that respond to musical inputs. It's niche but also neat that the Web platform can do this.
Peter Anglea
▶ 7 Secret Patterns Vue Consultants Don't Want You to Know - Clickbaity talk title, but Chris is both on the Vue core team and a great speaker :-)
Chris Fritz
Learn to Build JavaScript Apps with MongoDB in M101JS, MongoDB for Node Developers - MongoDB University courses are free and give you everything you need to know about MongoDB.
MongoDB sponsor
How to Write Powerful Schemas in JavaScript - An introduction to schm, a library for building model schemas in a functional, composable way.
Diego Haz
Getting Smaller Lodash Bundles with Webpack and Babel - Plus some tips for working with lodash-webpack-plugin.
Nolan Lawson
Elegant Patterns in Modern JavaScript: RORO - RORO stands for Receive an Object, Return an Object.
Bill Sourour
The Ultimate Angular CLI Reference Guide - Create new Angular 2+ apps, scaffold components, run tests, build for production, and more.
Jurgen Van de Moere
▶ Add ESLint and Prettier to VS Code for 'Create React App' Apps
Elijah Manor
Tips for Using ESLint in a Legacy Codebase - Techniques that can help you significantly reduce the number of errors you see.
Sheshbabu Chinnakonda
Free eBook: A Roundup of Managed Kubernetes Platforms
Codeship sponsor
Lookaheads (and Lookbehinds) in JS Regular Expressions
Stefan Judis
Unblocking Clipboard Access in Chrome 66+ - The Async Clipboard API supersedes the document.execCommand approach.
Jason Miller
Building Office 365/SharePoint Applications with Aurelia
Magnus Danielson
Code and Tools
GPU-Accelerated Neural Networks in JavaScript - A look at four libraries providing this type of functionality.
Sebastian Kwiatkowski
Get the Best, Most Complete Collection of Angular UI Controls: Wijmo - Wijmo's dependency-free UI controls include rich declarative markup, full IntelliSense, and the best data grid.
GrapeCity Wijmo sponsor
better-sqlite3: A Simple, Fast SQLite3 Library for Node
Joshua Wise
ngx-datatable: A Feature-Rich Data-Table Component for Angular - No external dependencies. Demos here.
Swimlane
vue-content-loader: SVG-based 'Loading Placeholder' Component - It's a port of ReactContentLoader.
EGOIST
DrawerJS: A Customizable HTML Canvas Drawing Tool - Live demo.
Carsten Schäfer
via JavaScript Weekly https://ift.tt/2pzqNa9
0 notes
Text
2018-03-08 18 LINUX now
LINUX
Linux Academy Blog
AWS Security Essentials has been released!
Employee Spotlight: Sara Currie, Technical Recruiter
Linux Academy Weekly Roundup 108
Free SSL with Let's Encrypt & NGINX
Michelle Gill - Becoming V.P. of Engineering
Linux Insider
Kali Linux Security App Lands in Microsoft Store
Microsoft Gives Devs More Open Source Quantum Computing Goodies
Red Hat Adds Zing to High-Density Storage
When It's Time for a Linux Distro Change
Endless OS Helps Tear Down Linux Wall
Linux Journal
Building a March Madness Bracket in PHP
Exim Vulnerability, GitHub Open-Sources Licensed, The Khronos Group Releases Vulkan 1.1 and More
Last chance: Subscribe now to get the highly anticipated comeback issue!
Best Laptop for Running Linux
diff -u: Linus Posting Habits
Linux Magazine
OpenStack Queens Released
Kali Linux Comes to Windows
Ubuntu to Start Collecting Some Data with Ubuntu 18.04
CNCF Illuminates Serverless Vision
LibreOffice 6.0 Released
Linux Today
Linux nl Command Tutorial for Beginners (7 Examples)
FreeTube - An Open Source Desktop YouTube Player For Privacy-minded People
Host your own email with projectx/os and a Raspberry Pi
Google Chrome 65 Now Rolling Out to Android Devices to Fight Malvertising
How to install ERPNext on Debian 9
Linux.com
LFS458 Kubernetes Administration
China SDN/NFV Conference
Protecting Code Integrity with PGP - Part 4: Moving Your Master Key to Offline Storage
One Week Until Embedded Linux Conference + OpenIoT Summit in Portland: Will You Join Us?
Kubernetes Graduates to Full-Fledged, Open-Source Program
Reddit Linux
Linux Networking Dietary Restrictions
Distros that randomise MAC address
This is a great idea - Using Android apps inside Linux with Anbox
Meet India's women Open Source warriors
Apple's top-secret iBoot firmware source code spills onto GitHub for some insane reason
Riba Linux
How to install SwagArch GNU/Linux 18.03
SwagArch GNU/Linux 18.03 overview | A simple and beautiful Everyday Desktop
How to install Nitrux 1.0.9
Nitrux 1.0.9 overview | Change The Rules
Pixel OS 1.0 "Apu" Public Beta 1 overview | Meet Pixel OS
Slashdot Linux
NASA Spacecraft Reveals Jupiter's Interior In Unprecedented Detail
Most Americans Think AI Will Destroy Other People's Jobs, Not Theirs
Samsung's New TVs Are Almost Invisible
California Becomes 18th State To Consider Right To Repair Legislation
Oculus Rift Headsets Are Offline Following a Software Error
Softpedia
Mozilla Firefox 58.0.2 / 59.0 Beta 14
Evolution 3.26.6
Evolution 3.28.0 RC
Evolution Data Server 3.26.6 / 3.28.0 RC
Evolution Mapi 3.26.6 / 3.28.0 RC
Tecmint
How to Install Particular Package Version in CentOS and Ubuntu
How to Enable and Disable Root Login in Ubuntu
8 Best Tools to Access Remote Linux Desktop
How to Install NetBeans IDE 8.2 in Debian, Ubuntu and Linux Mint
How to Install NetBeans IDE in CentOS, RHEL and Fedora
nixCraft
400K+ Exim MTA affected by overflow vulnerability on Linux/Unix
Book Review: SSH Mastery - OpenSSH, PuTTY, Tunnels & Keys
How to use Chomper Internet blocker for Linux to increase productivity
Linux/Unix desktop fun: Simulates the display from "The Matrix"
Ubuntu 17.10 no longer available for download due to LENOVO bios getting corrupted
0 notes
Text
Backing Up Percona Kubernetes Operator for Percona XtraDB Cluster Databases to Google Cloud Storage
The Percona Kubernetes Operator for Percona XtraDB Cluster can send backups to Amazon S3 or S3-compatible storage. And every now and then at Support, we are asked how to send backups to Google Cloud Storage. Google Cloud Storage offers an "interoperability mode" which is S3-compatible. However, there are a few details to take care of when using it.

Google Cloud Storage Configuration

First, select "Settings" under "Storage" in the Navigation Menu. Under Settings, select the Interoperability tab. If Interoperability is not yet enabled, click Enable Interoperability Access. This turns on the S3-compatible interface to Google Cloud Storage. After enabling S3-compatible storage, an access key needs to be generated. There are two options: Access keys can be tied to Service accounts or User accounts. For production workloads, Google recommends Service account access keys, but for this example, a User account access key will be used for simplicity. The Interoperability page links to further documentation on the differences between the two, so this article does not go into those details.

To create User account HMAC (Hash-based Message Authentication Code) keys, scroll down to "User account HMAC" and click "Create a key". This generates an access key and accompanying secret. These keys will be used as an AWS access key and secret later on. The user account also needs access to the bucket that will be used for backups. This can be set up by selecting the bucket in Storage Browser, and going to the Permissions tab.

Operator Configuration

Once a key has been created and the account permissions are verified to be correct, the Percona XtraDB Cluster (PXC) Operator needs to be configured to use the new keys. First, the access key and secret need to be base64 encoded. For example:

$ echo -n GOOGFJDEWQ3KJFAS | base64
R09PR0ZKREVXUTNLSkZBUw==
$ echo -n IFEWw99s0+ece3SXuf9q | base64
SUZFV3c5OXMwK2VjZTNTWHVmOXE=

The -n parameter to echo is important; without it a line break will also be encoded and the key won't work. Next, the base64-encoded values need to be stored in the deploy/backup-s3.yaml file in the PXC Operator directory as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY like this:

$ cat deploy/backup-s3.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-test-backup-s3
type: Opaque
data:
  AWS_ACCESS_KEY_ID: R09PR0ZKREVXUTNLSkZBUw==
  AWS_SECRET_ACCESS_KEY: SUZFV3c5OXMwK2VjZTNTWHVmOXE=

After modifying the file, the secrets need to be stored in Kubernetes using:

$ kubectl apply -f deploy/backup-s3.yaml

In the cr.yaml of the PXC Operator the backup destination is defined as follows:

  storages:
    s3-us-central1:
      type: s3
      s3:
        bucket: my-test-bucket
        credentialsSecret: my-test-backup-s3
        region: us-central1
        endpointUrl: https://storage.googleapis.com/

bucket is the name of the bucket as created in Google Cloud Storage, credentialsSecret must match the entry in backup-s3.yaml, and endpointUrl is the "Storage URI" as shown in the Interoperability tab of Google Cloud Storage. Now that the backup destination has been defined, to take an on-demand backup the backup/backup.yaml file needs to be modified:

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: my-test-backup
spec:
  pxcCluster: cluster1
  storageName: s3-us-central1

Here pxcCluster needs to match the name of the cluster, and storageName needs to match the entry in cr.yaml.

After modifying the file, an on-demand backup can be started using:

$ kubectl apply -f deploy/backup/backup.yml

From here on the documentation for the PXC Operator at https://www.percona.com/doc/kubernetes-operator-for-pxc/backups.html can be followed, since after configuring the Google Cloud Storage destination, taking and restoring backups works exactly as it does when using Amazon S3.

Conclusion

As you can see, using Google Cloud Storage together with the Percona Kubernetes Operator for Percona XtraDB Cluster is not difficult at all, but a few details are slightly different from Amazon S3. Be sure to get in touch with Percona's Training Department to schedule a hands-on tutorial session with our K8S Operator. Our instructors will guide you and your team through all the setup processes; learn how to take backups, handle recovery, scale the cluster, and manage high-availability with ProxySQL. Percona XtraDB Cluster is a cost-effective and robust clustering solution created to support your business-critical data. It gives you the benefits and features of MySQL along with the added enterprise features of Percona Server for MySQL. Download Percona XtraDB Cluster Datasheet
https://www.percona.com/blog/2020/07/20/backing-up-percona-kubernetes-operator-for-percona-xtradb-cluster-databases-to-google-cloud-storage/
0 notes
Text
What is Kubernetes
So What is Kubernetes?
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, with a framework to run distributed systems resiliently. It takes care of your scaling requirements, failover, deployment patterns, load balancing, logging, and monitoring, much like PaaS offerings. However, it operates at the container level rather than at the hardware level.
 It was initially built upon a decade and a half of Google's experience running production workloads. Open-sourced in 2014, Kubernetes is now a growing ecosystem that combines best practices for application deployment to run some of the largest software services by scale.
 The name Kubernetes is derived from a Greek term meaning "helmsman" or "pilot." True to this word, Kubernetes provides the guiding force for developer platforms to transition from virtual machines (VMs) to containers, and from statically scheduled to dynamically scheduled workloads. This means no more manual integration and configuration when you move from a testing environment to an actual production environment or from on-premise to the cloud! The Kubernetes logical compute environment offers common services to all the applications in the cluster as part of the ecosystem for the software to run consistently.
 Kubernetes: The Container Orchestration Tool
Kubernetes allows you to manage hundreds of containers and clusters of hosts on which containers are executed. When you deploy your containerized applications to a group of computers, Kubernetes automates their distribution and scheduling, working as an orchestration platform to simplify the work of technical teams.
 Particularly, in instances when you need to manage applications with hundreds of containers spread across multiple hosts, a container orchestration tool like Kubernetes manages the workloads in a compute cluster, connecting to the outside world for scheduling, load balancing, and distribution.
 The Kubernetes DevOps Tool
The container orchestration capability of Kubernetes closes the gap between IT operations and development, making a great collaborative DevOps environment for sharing software and their dependencies seamlessly between different environments.
 It facilitates the software lifecycle and enables teams along the build-test-deploy timeline:
 Developer environment, by helping to run the software in any setting
QA/Testing process, through coordinated pipelines between test and production
Sys-admin, by running anything once configured
Operations, by offering a comprehensive solution for building, shipping, and scaling software
Kubernetes has emerged as a good actor in DevOps as it focuses on features and bugs rather than time-intensive tasks to enable better software to be shipped with a smooth DevOps workflow.
 Benefits of Using Kubernetes
Although we have several tools in DevOps that are equally popular, like Docker, Kubernetes wins the votes. This is because of the many benefits that far outweigh other tools.
 Among its many attributes, Kubernetes:
 Lays the foundations for developing and building cloud-native applications that can run anywhere, independent of cloud requirements
Speeds up the process of building, testing, and releasing software
Has the ability to handle scaling-up of both applications and infrastructure in real-time
Tackles workload scalability on the fly
Controls resource consumption and hardware use
Balances application load across the host infrastructure
Moves an application to another host in the event of resource shortage
Facilitates easy rollbacks
Tests and auto-corrects applications
Delivers the software quickly with better compliance
Increases transparency and collaboration within the teams and pipelines
Effectively minimizes security risk while controlling cost
Increases the efficiency of server usage
Renders health-check of your apps and self-heals with auto-placement, auto-restart, auto-replication, and auto-scaling
Can be combined with other open-source projects to orchestrate all parts of your container infrastructure
Supports better IT security
Helps manage your containerized applications more easily and quickly
Increases developer productivity
Automates patches and updates
Allows visibility for in-process and failed deployments with status query support
Saves time when a deployment is paused at any time, to be quickly resumed later
Allows version control with newer versions of application images or a rollback when the current version is not stable
Supports container balancing as it automatically places containers by computing the best location
Manages your batch and compute-intensive (CI) workloads for efficient batch execution
Reduces the time to onboard new projects and applications
The benefits of Kubernetes extend beyond the development, testing, and production environment to perform mission-critical tasks in large-scale businesses.
 Features of Kubernetes
Kubernetes offers the widest range of features required to deploy containerized applications.
 Portable and Open-Sourced
As an open-source platform, Kubernetes can run containers on any number of public clouds, virtual machines, or infrastructures. Its compatibility with most platforms makes it highly flexible and usable.
 Programming Language and Framework Support
Kubernetes supports most programming languages and frameworks.
 Automatic Resource Bin Packing
The application is packaged, and the containers scheduled based on available resources, allowing optimal utilization of unused resources. As Kubernetes enables you to specify the CPU and RAM needs of each container, the containers can be slotted to increase compute efficiency and ultimately lower costs.
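As an illustrative fragment of a Pod spec (the container name, image and numbers are invented, not taken from the article), per-container CPU and RAM needs are declared like this:

    containers:
    - name: demo-app               # hypothetical container
      image: nginx:1.25
      resources:
        requests:                  # what the scheduler uses for bin packing
          cpu: "250m"
          memory: "128Mi"
        limits:                    # hard ceiling enforced at runtime
          cpu: "500m"
          memory: "256Mi"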
 Container Deployment Control
Kubernetes allows complete control over the number of containers you want with a Deployment and keeps those containers ready with a rollout. Thus, you can automate Kubernetes to create new containers, remove existing containers, or adopt all of their resources into a new container.
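For example, a minimal Deployment (names and image are placeholders assumed for illustration) declares the desired number of replicas, and Kubernetes keeps exactly that many containers running through rollouts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 3                      # Kubernetes creates or removes pods to hold this count
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25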
Automated Rollouts and Rollbacks
Versions and updates are automated and running, so you don't waste time or resources on downtime. Also, the health of the application is screened during rollout to automatically roll back in the case of any glitch or failure.
 Health Checks and Self-healing
It checks the health of nodes and containers to ensure that an application doesn't fail. In case of a pod crash or an error, Kubernetes automatically restarts containers that fail, replaces or kills containers that don't match user-defined health checks, and doesn't make them available to clients until they are client-ready.
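A sketch of what such user-defined health checks look like on a container; the endpoint paths and timings below are assumptions, purely for illustration:

    containers:
    - name: web
      image: nginx:1.25
      livenessProbe:               # failing checks cause the container to be restarted
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:              # the pod only receives traffic once this passes
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5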
 Secure Configuration Management
You can store and manage user information such as passwords and SSH keys, deploy secrets and application configuration without rebuilding your container images, and do all of this without exposing secrets in your stack configuration.
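As a small hedged example (image and Secret name are invented), a password stored in a Secret can be surfaced to a running container as a file, without baking it into the image or exposing it in the stack configuration:

    containers:
    - name: app
      image: registry.example.com/app:1.0    # hypothetical image
      volumeMounts:
      - name: db-creds
        mountPath: /etc/secrets              # the password appears as a file here
        readOnly: true
    volumes:
    - name: db-creds
      secret:
        secretName: app-db-secret            # assumed Secret name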
 Service Discovery and Load Balancing
Kubernetes can expose a container using a DNS name or IP address. For high traffic to a container, it can automatically balance the load across the pods and distribute the network traffic for the stable deployment of software.
 This supports the distribution of load and auto-balancing of resources instantly during incidental traffic or batch processing.
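A hedged sketch of a Service that gives a set of pods a stable DNS name and load-balances traffic across them; the names and ports are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: demo-web                   # resolvable as demo-web.<namespace>.svc.cluster.local
spec:
  type: LoadBalancer               # use ClusterIP instead for purely internal traffic
  selector:
    app: demo-web                  # traffic is spread across all pods carrying this label
  ports:
  - port: 80
    targetPort: 80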
 Storage Orchestration
You can automatically mount a storage system or orchestrate containers on multiple hosts.
 Auto-Scaling of Resources and Applications in Real-Time
Kubernetes offers several features for auto-scaling. You can deploy and control the number of containers based on computing resources and workload balance, and scale out your software or create applications on more containers by grouping containers in pods. Horizontal autoscaling is another feature whereby Kubernetes auto-scalers automatically size a deployment's number of pods based on the usage of specified resources and at the individual server level.
 New servers can be added or removed easily. Kubernetes can thus automatically expose your containers to the internet or other containers in the cluster to automatically load balance traffic across matching containers.
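For illustration, a HorizontalPodAutoscaler that sizes a Deployment on CPU usage might look like the sketch below; the target name and all numbers are assumptions:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-web                 # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add pods when average CPU crosses 70%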
 Heterogeneous Clusters
Kubernetes allows you to build your cluster with a mix of virtual machines on the cloud, on-premise, or in your data center, to suit your requirements.
 Persistent Storage Support
Kubernetes workflow includes support for Amazon Web Services EBS, Google Cloud Platform persistent disks, and other storage.
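A minimal PersistentVolumeClaim sketch (size and storage class are placeholders, not from the article) shows how an application requests such storage; on AWS or GCP the class typically maps to EBS volumes or persistent disks:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard       # assumed class name
  resources:
    requests:
      storage: 10Gi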
 Workload Support
Kubernetes supports a variety of workloads: stateless, stateful, data-processing.
 Application Type Support
Kubernetes offers complete support for the application types, application frameworks, and language without differentiating between apps and services.
 To get a brief understanding of the features, architecture, and working of Kubernetes, check out this Kubernetes Tutorial video -
   Takeaway
Kubernetes has emerged as the cornerstone of DevOps. Its many benefits and flexibility make it the preferred choice of companies when they want to develop, test, and deploy their products and services. Thus, more and more companies are investing in the container management system and Kubernetes.
 If you're looking at enhancing your career prospects in DevOps or building in-depth knowledge about containerization and orchestration tools, then you must check out Simplilearn's Certified Kubernetes Administrator (CKA) Certification Training. Learn how to build applications in containers and deploy and manage a Kubernetes cluster. Master the most trending DevOps tool, Kubernetes, to help facilitate the process of development-to-deployment. [Source] - https://www.simplilearn.com/what-is-kubernetes-article
Basic & Advanced Kubernetes Training Online using cloud computing, AWS, Docker etc. in Mumbai. Advanced Containers Domain is used for 25 hours Kubernetes Training.
0 notes
Text
Using docker-in-docker with ephemeral Jenkins workspaces
I was presented with a challenge a few months ago: "Create a container based hybrid CI/CD pipeline that includes GKE that we can demo at Google Cloud Next '19". Specs are always vague in these requests, which suits me quite well as you get a lot of creative freedom to solve the task at hand. After listening to a talk on YouTube by Vic Iglesias I was intrigued by the idea of ephemeral workspaces that are dynamically created for each build job. As we all know, anything idling in the cloud costs money.
Hello World
The workspace is in fact a Kubernetes workload that the Jenkins master boots up for a particular build job. This was fairly easy to set up; I simply followed the setup procedures in the GKE docs and I had my echo "Hello World" up in minutes. The challenge quickly ramped up as I realized I needed to run a Docker daemon that the Jenkins agent could issue docker build against. How would you go about doing this without statically defining a DOCKER_HOST somewhere in your cloud?
Is docker-in-docker a thing for Kubernetes?
I knew from past encounters that you can run docker-in-docker, and it's well documented. Cobbling this together with GKE and Jenkins seemed to be a less obvious topic while googling. I realized I was using the stock Kubernetes Plugin for Jenkins to dynamically provision Jenkins agents. The plugin allows you to declare your own Pod specification, hence running a sidecar docker daemon with the Jenkins agent is quite trivial.
For reference, here's the full pod spec the Jenkins master eventually spawns:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
    jenkins/cd-jenkins-slave: "true"
  name: default-s13mc
  namespace: cicd
spec:
  containers:
  - env:
    - name: JENKINS_SECRET
      value: 898200a1131e649637edb5c5faa3778c541bba82b9855d139b15cca7bf3e4492
    - name: JENKINS_TUNNEL
      value: cd-jenkins-agent:50000
    - name: JENKINS_AGENT_NAME
      value: default-s13mc
    - name: JENKINS_NAME
      value: default-s13mc
    - name: JENKINS_URL
      value: http://cd-jenkins:8080/
    - name: HOME
      value: /home/jenkins
    image: docker:18.09.3-dind
    imagePullPolicy: IfNotPresent
    name: dind
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
    volumeMounts:
    - mountPath: /var/lib/docker
      name: volume-0
    - mountPath: /home/jenkins
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-w7k5z
      readOnly: true
    workingDir: /home/jenkins
  - args:
    - 898200a1131e649637edb5c5faa3778c541bba82b9855d139b15cca7bf3e4492
    - default-s13mc
    env:
    - name: JENKINS_SECRET
      value: 898200a1131e649637edb5c5faa3778c541bba82b9855d139b15cca7bf3e4492
    - name: JENKINS_TUNNEL
      value: cd-jenkins-agent:50000
    - name: JENKINS_AGENT_NAME
      value: default-s13mc
    - name: JENKINS_NAME
      value: default-s13mc
    - name: JENKINS_URL
      value: http://cd-jenkins:8080
    - name: HOME
      value: /home/jenkins
    image: drajen/jnlp-slave:3.27-5
    imagePullPolicy: IfNotPresent
    name: jnlp
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/docker
      name: volume-0
    - mountPath: /home/jenkins
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-w7k5z
      readOnly: true
    workingDir: /home/jenkins
  dnsPolicy: ClusterFirst
  nodeName: gke-standard-cluster-1-default-pool-dcc3e8a4-8jvh
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: volume-0
  - emptyDir: {}
    name: workspace-volume
  - name: default-token-w7k5z
    secret:
      defaultMode: 420
      secretName: default-token-w7k5z
What we can see here is that I can use the stock docker image from Docker, Inc with the -dind tag. In the volumeMounts section we can also see the mapping against /var/lib/docker, which in turn allows the Jenkins agent to run the docker command without constraints or configuration to do so. The Jenkins team makes it very easy to build your own custom agent, and throwing in the docker binary is not harder than this Dockerfile example (with some other extras sprinkled in):
FROM jenkins/jnlp-slave:3.27-1
USER root
RUN apt-get update && \
    apt-get install -y python-pip \
      apt-transport-https \
      ca-certificates \
      curl \
      gnupg2 \
      software-properties-common && \
    curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - && \
    add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/debian \
      $(lsb_release -cs) \
      stable" && \
    curl -ssSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add && \
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && \
    apt-get update && \
    apt-get install -y docker-ce-cli kubectl && \
    pip install ansible && \
    apt-get clean && \
    mkdir -p /etc/ansible && \
    echo "localhost ansible_connection=local" | tee -a /etc/ansible/hosts
USER jenkins
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:~/.local/bin
Further, I used the GKE plugin to apply the new manifest I generate with the Ansible template plugin (there's some history behind this choice I don't recall at the moment, but I love Ansible, maybe that's why).
Output
It's questionable how common my use case is. I would assume a more natural path for this type of pipeline would be a cloud-provided build system, like Google Cloud Build. I was looking for a short path to victory at the time for the demo asset, and Jenkins is a known variable in the equation that you can bend to your will for the most part.
The demo I put together for Google Cloud Next '19 was published to YouTube shortly after, "Using HPE Cloud Volumes with Google Kubernetes Engine with Hybrid Cloud CI/CD pipelines on Jenkins":
0 notes
Text
This tutorial has been written to help you install Minikube on CentOS 8 / CentOS 7 with KVM Hypervisor. Minikube is an open source tool designed to enable developers and system administrators to bootstrap a single node Kubernetes cluster in their local machine â Laptops, Desktop workstations in minutes. This is ideal for development and POC purposes, but not for running Production workloads. For installation of Minikube on Ubuntu / Debian Linux system, check: How to install Minikube on Ubuntu / Debian Linux. In a nutshell, Minikube packages and configures a Linux VM, then installs Docker and all Kubernetes components into it. Which you can manage and deploy applications from kubectl running in the host system. Kubernetes Supported features Some of the features which you can run from Kubernetes running in Minikube are: DNS NodePorts ConfigMaps and Secrets Dashboards Container Runtime: Docker, CRI-O, and containerd Enabling CNI (Container Network Interface) Ingress PersistentVolumes of type hostPath Minikube supports both VirtualBox and KVM hypervisors., but this guide is for running Minikube with KVM on a CentOS 8 / CentOS 7 Linux machine. Step 1: Update system Run the following commands to update all system packages to the latest release: sudo yum -y update Step 2: Install KVM Hypervisor As stated earlier, weâll use KVM as Hypervisor of choice for the Minikube VM. Here is our complete guide on the installation of KVM on CentOS / RHEL 8. How To Install KVM on RHEL 8 / CentOS 8 Linux Install KVM on CentOS 7 Confirm that libvirtd service is running. $ systemctl status libvirtd â libvirtd.service - Virtualization daemon Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2020-01-20 14:33:07 EAT; 1s ago Docs: man:libvirtd(8) https://libvirt.org Main PID: 20569 (libvirtd) Tasks: 20 (limit: 32768) Memory: 70.4M CGroup: /system.slice/libvirtd.service ââ 2653 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper ââ 2654 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper ââ20569 /usr/sbin/libvirtd Jan 20 14:33:07 cent8.localdomain systemd[1]: Starting Virtualization daemon... Jan 20 14:33:07 cent8.localdomain systemd[1]: Started Virtualization daemon. Jan 20 14:33:08 cent8.localdomain dnsmasq[2653]: read /etc/hosts - 2 addresses Jan 20 14:33:08 cent8.localdomain dnsmasq[2653]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses Jan 20 14:33:08 cent8.localdomain dnsmasq-dhcp[2653]: read /var/lib/libvirt/dnsmasq/default.hostsfile If not running after installation, then start and set it to start at boot. sudo systemctl enable --now libvirtd You user should be part of libvirt group. sudo usermod -a -G libvirt $(whoami) newgrp libvirt Open the file /etc/libvirt/libvirtd.conf for editing. sudo vi /etc/libvirt/libvirtd.conf Set the UNIX domain socket group ownership to libvirt, (around line 85) unix_sock_group = "libvirt" Set the UNIX socket permissions for the R/W socket (around line 102) unix_sock_rw_perms = "0770" Restart libvirt daemon after making the change. sudo systemctl restart libvirtd.service Step 3: Download minikube You need to download the minikube binary. I will put the binary under /usr/local/bin directory since it is inside $PATH. 
sudo yum -y install wget wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 chmod +x minikube-linux-amd64 sudo mv minikube-linux-amd64 /usr/local/bin/minikube Confirm installation of Minikube on your system. $ minikube version minikube version: v1.23.2 commit: 0a0ad764652082477c00d51d2475284b5d39ceed Step 4: Install kubectl We need kubectl which is a command-line tool used to deploy and manage applications on Kubernetes.
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl Give the file executable bit and move to a location in your PATH. chmod +x kubectl sudo mv kubectl /usr/local/bin/ Confirm the version of kubectl installed. $ kubectl version --client -o json "clientVersion": "major": "1", "minor": "22", "gitVersion": "v1.22.2", "gitCommit": "8b5a19147530eaac9476b0ab82980b4088bbc1b2", "gitTreeState": "clean", "buildDate": "2021-09-15T21:38:50Z", "goVersion": "go1.16.8", "compiler": "gc", "platform": "linux/amd64" Step 5: Starting minikube Now that components are installed, you can start minikube. VM image will be downloaded and configured for Kubernetes single node cluster. Edit Libvirtd configuration file and set group: $ sudo vim /etc/libvirt/libvirtd.conf unix_sock_group = "libvirt" unix_sock_rw_perms = "0770" Restart libvirtd daemon: sudo systemctl restart libvirtd Add your username to libvirt group: $ sudo usermod -aG libvirt $USER $ newgrp libvirt $ id uid=1000(jkmutai) gid=989(libvirt) groups=989(libvirt),10(wheel),1000(jkmutai) For a list of options, run: $ minikube start --help To create a minikube VM with the default options, run: $ minikube start The default container runtime to be used is docker, but you can also use crio or containerd: $ minikube start --container-runtime=cri-o $ minikube start --container-runtime=containerd The installer will automatically detect KVM and download KVM driver. * minikube v1.23.2 on CentOS 8.4 * Automatically selected the kvm2 driver * Downloading driver docker-machine-driver-kvm2: > docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s > docker-machine-driver-kvm2: 11.40 MiB / 11.40 MiB 100.00% 1.09 MiB p/s 1 * Downloading VM boot image ... > minikube-v1.23.1.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s > minikube-v1.23.1.iso: 225.22 MiB / 225.22 MiB 100.00% 103.78 MiB p/s 2.4 * Starting control plane node minikube in cluster minikube .... If you have more than one hypervisor, then specify it. $ minikube start --vm-driver kvm2 Please note that latest stable release of Kubernetes is installed. Use --kubernetes-version flag to specify version to be installed. Example: --kubernetes-version='v1.22.2' Wait for the download and setup to finish then confirm that everything is working fine. $ minikube start * minikube v1.23.2 on Centos 8.4 * Automatically selected the kvm2 driver * Downloading driver docker-machine-driver-kvm2: > docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s > docker-machine-driver-kvm2: 11.40 MiB / 11.40 MiB 100.00% 1.09 MiB p/s 1 * Downloading VM boot image ... > minikube-v1.23.1.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s > minikube-v1.23.1.iso: 225.22 MiB / 225.22 MiB 100.00% 103.78 MiB p/s 2.4 * Starting control plane node minikube in cluster minikube * Downloading Kubernetes v1.22.2 preload ... > preloaded-images-k8s-v13-v1...: 579.88 MiB / 579.88 MiB 100.00% 71.91 Mi * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ... * Deleting "minikube" in kvm2 ... * Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ... * Preparing Kubernetes v1.22.2 on CRI-O 1.22.0 ... - Generating certificates and keys ... - Booting up control plane ... - Configuring RBAC rules ... * Configuring bridge CNI (Container Networking Interface) ... * Verifying Kubernetes components... 
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Step 6: Minikube Basic operations
The kubectl command line tool is now configured to use "minikube". To check cluster status, run:

$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

$ kubectl cluster-info
Kubernetes master is running at https://192.168.39.2:8443
KubeDNS is running at https://192.168.39.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Your Minikube configuration file is located under ~/.minikube/machines/minikube/config.json

To view the config, use:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jkmutai/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Sep 2021 00:44:49 EAT
        provider: minikube.sigs.k8s.io
        version: v1.23.2
      name: cluster_info
    server: https://192.168.39.195:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    extensions:
    - extension:
        last-update: Mon, 27 Sep 2021 00:44:49 EAT
        provider: minikube.sigs.k8s.io
        version: v1.23.2
      name: context_info
    namespace: default
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/jkmutai/.minikube/profiles/minikube/client.crt
    client-key: /home/jkmutai/.minikube/profiles/minikube/client.key

To check running nodes:

$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
minikube   Ready    control-plane,master   2m53s   v1.22.2

Access the minikube VM using ssh:

$ minikube ssh
(minikube ASCII banner)
$ sudo su -
# cat /etc/os-release
NAME=Buildroot
VERSION=2021.02.4-dirty
ID=buildroot
VERSION_ID=2021.02.4
PRETTY_NAME="Buildroot 2021.02.4"
# exit
logout
$ exit
logout

To stop a running local kubernetes cluster, run:

$ minikube stop
* Stopping "minikube" in kvm2 ...
* "minikube" stopped.

To start the VM again, run:

$ minikube start
* minikube v1.23.2 on CentOS 8.4
* Using the kvm2 driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing kvm2 VM for "minikube" ...
* Preparing Kubernetes v1.22.2 on CRI-O 1.22.0 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

To delete a local kubernetes cluster, use:

$ minikube delete

Step 7: Enable Kubernetes Dashboard
Kubernetes ships with a web dashboard which allows you to manage your cluster without interacting with the command line. The dashboard addon ships with minikube but is disabled until you enable it or run minikube dashboard.

$ minikube addons list
|-----------------------------|----------|--------------|-----------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |      MAINTAINER       |
|-----------------------------|----------|--------------|-----------------------|
| ambassador                  | minikube | disabled     | unknown (third-party) |
| auto-pause                  | minikube | disabled     | google                |
| csi-hostpath-driver         | minikube | disabled     | kubernetes            |
| dashboard                   | minikube | disabled     | kubernetes            |
| default-storageclass        | minikube | enabled ✅   | kubernetes            |
| efk                         | minikube | disabled     | unknown (third-party) |
| freshpod                    | minikube | disabled     | google                |
| gcp-auth                    | minikube | disabled     | google                |
| gvisor                      | minikube | disabled     | google                |
| helm-tiller                 | minikube | disabled     | unknown (third-party) |
| ingress                     | minikube | disabled     | unknown (third-party) |
| ingress-dns                 | minikube | disabled     | unknown (third-party) |
| istio                       | minikube | disabled     | unknown (third-party) |
| istio-provisioner           | minikube | disabled     | unknown (third-party) |
| kubevirt                    | minikube | disabled     | unknown (third-party) |
| logviewer                   | minikube | disabled     | google                |
| metallb                     | minikube | disabled     | unknown (third-party) |
| metrics-server              | minikube | disabled     | kubernetes            |
| nvidia-driver-installer     | minikube | disabled     | google                |
| nvidia-gpu-device-plugin    | minikube | disabled     | unknown (third-party) |
| olm                         | minikube | disabled     | unknown (third-party) |
| pod-security-policy         | minikube | disabled     | unknown (third-party) |
| portainer                   | minikube | disabled     | portainer.io          |
| registry                    | minikube | disabled     | google                |
| registry-aliases            | minikube | disabled     | unknown (third-party) |
| registry-creds              | minikube | disabled     | unknown (third-party) |
| storage-provisioner         | minikube | enabled ✅   | kubernetes            |
| storage-provisioner-gluster | minikube | disabled     | unknown (third-party) |
| volumesnapshots             | minikube | disabled     | kubernetes            |
|-----------------------------|----------|--------------|-----------------------|

Enable a plugin with:

minikube addons enable <addon-name>

Example:

$ minikube addons enable csi-hostpath-driver
! [WARNING] For full functionality, the 'csi-hostpath-driver' addon requires the 'volumesnapshots' addon to be enabled. You can enable 'volumesnapshots' addon by running: 'minikube addons enable volumesnapshots'
  - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
  - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
  - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
  - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
  - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
  - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
  - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
  - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
  - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
* Verifying csi-hostpath-driver addon...
* The 'csi-hostpath-driver' addon is enabled

To open the dashboard directly in your default browser, use:

$ minikube dashboard
* Enabling dashboard ...
  - Using image kubernetesui/metrics-scraper:v1.0.7
  - Using image kubernetesui/dashboard:v2.3.1
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:39649/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
http://127.0.0.1:39649/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

To get the URL of the dashboard:

$ minikube dashboard --url
http://192.168.39.117:30000

Access the Kubernetes Dashboard by opening the URL in your favorite browser.

For further reading, check:
Hello Minikube Series: https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/
Minikube guides for newbies: https://kubernetes.io/docs/getting-started-guides/minikube/
Text
Deploying a Stateful Application on Azure Kubernetes Service (AKS)
Once you go through this Kubernetes tutorial, you'll be able to follow the processes & ideas outlined here to deploy any stateful application on Azure Kubernetes Service (AKS).
In our previous post, we guided you through the process of deploying a stateful, Dockerized Node.js app on Google Cloud Kubernetes Engine! As an example application, we used our blog engine called Ghost. If you read that post, you'll see that the cluster creation, disk provisioning, and the MySQL database creation and handling are vendor specific, which also leaks into our Kubernetes objects. So let's compare that to setting up an AKS cluster on Azure and deploying our Ghost there.
This article was written by Kristof Ivancza who is a software engineer at RisingStack & Tamas Kadlecsik, RisingStack's CEO. In case you need guidance with Kubernetes or Node.js, feel free to ping us at [email protected]
If you are not familiar with Kubernetes, I recommend reading our Getting started with Kubernetes article first.
What will we need to deploy a stateful app on Azure Kubernetes Engine?
Create a cluster
Persistent Disks to store our images and themes
Create a MySQL instance and connect to it
A secret to store credentials
A deployment
A service to expose the application
Creating the Cluster
First, we need to create a cluster, set the default cluster for AKS and pass cluster credentials to kubectl.
# create an Azure resource group
$ az group create --name ghost-blog-resource --location eastus
# locations: eastus, westeurope, centralus, canadacentral, canadaeast
# ------
# create a cluster
$ az aks create --resource-group ghost-blog-resource --name ghost-blog-cluster --node-count 1 --generate-ssh-keys
# this process could take several minutes
# it will return a JSON with information about the cluster
# ------
# pass AKS Cluster credentials to kubectl
$ az aks get-credentials --resource-group ghost-blog-resource --name ghost-blog-cluster
# make sure it works
$ kubectl get node
The Container and the Deployment
We'll use the same image as before, and the Deployment will be the same as well. I'll add it to this blogpost though, so you can see what it looks like.
# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
Creating Persistent Disks to Store our Images and Themes
We'll create our disk using Dynamic Provisioning again. This time, however, we won't specify the storageClassName, as Kubernetes uses the default one when it's omitted. We could have done this on GKE as well, but there I wanted to give a more detailed picture of the disk creation. On GKE the default StorageClass is called standard; on AKS it is called default.
# PersistentVolumeClaim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pd-blog-volume-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Submit this yaml with the following command:
$ kubectl apply -f PersistentVolumeClaim.yml
# make sure it is bound
$ kubectl get pvc
# it could take a few minutes to be bound; if it's pending for more than a minute, check `kubectl describe` to make sure nothing fishy happened
$ kubectl describe pvc
The deployment should be updated as well, just as before:
# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
      volumes:
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim
Creating a MySQL Instance and Connecting to it Using SSL
First we need to add the MySQL extension for Azure Databases.
$ az extension add --name rdbms
Now we're ready to create our MySQL server.
$ az mysql server create --resource-group ghost-blog-resource --name ghost-database --location eastus --admin-user admin --admin-password password --sku-name GP_Gen4_2 --version 5.7 # this could take several minutes to complete
Configuring the Firewall Rule
$ az mysql server firewall-rule create --resource-group ghost-blog-resource --server ghost-database --name allowedIPrange --start-ip-address 0.0.0.0 --end-ip-address 255.255.255.255
This rule will give access to the database from every IP. It is certainly not recommended to open everything. However, the Nodes in our cluster will have different IP addresses which are difficult to guess ahead of time. If we know that we will have a set number of Nodes, let's say 3, we can specify those IP addresses. However, if we plan to use Node autoscaling, we will need to allow connections from a wide range of IPs. You can use this as a quick and dirty solution, but it is definitely better to use a VNet.
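If you do know the outbound IP range of your nodes, a narrower rule is straightforward; here's a hedged sketch reusing the command above (the range below is a made-up example, substitute your own):

$ az mysql server firewall-rule create --resource-group ghost-blog-resource --server ghost-database --name allowedNodeRange --start-ip-address 52.170.0.0 --end-ip-address 52.170.0.255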
Configure Vnet service endpoints for Azure Database for MySQL
Virtual Network (VNet) service endpoint rules for MySQL are a firewall security feature. By using them, we can limit access to our Azure MySQL server so that it only accepts requests sent from a particular subnet in a virtual network. With VNet rules, we don't have to configure Firewall Rules and add each and every node's IP to grant access to our Kubernetes cluster.
$ az extension add --name rdbms-vnet
# make sure it got installed
$ az extension list | grep "rdbms-vnet"
  {
    "extensionType": "whl",
    "name": "rdbms-vnet",
    "version": "10.0.0"
  }
The upcoming steps will have to be done in the browser as there is no way to do it through the CLI. Or even if there is, it is definitely not documented, so it is a lot more straightforward to do it on the UI.
Go to Azure Portal and login to your account
In the search bar on the top search for Azure Database for MySQL servers.
Select the database you created (ghost-database).
On the left sidebar, click Connection Security
You will find VNET Rules in the middle. Click + Adding existing virtual network
Give it a name (e.g: myVNetSQLRule),
Select your subscription type
Under Virtual Network, select the created resource group and the subnet name / address prefix will autocomplete itself with the IP range.
Click Enable.
That's it. :)
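If you prefer to stay in the terminal, some versions of the rdbms-vnet extension do expose a vnet-rule command. I haven't verified it against every CLI release, so treat the following as a sketch rather than gospel (the VNet and subnet names are placeholders):

$ az mysql server vnet-rule create --resource-group ghost-blog-resource --server-name ghost-database --name myVNetSQLRule --vnet-name <vnet-name> --subnet <subnet-name>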
Security on Azure Kubernetes Service (AKS)
Now that we're discussing security, let's talk about SSL. By default it's enforced, but you can disable/enable it with the following command (or disable it in the Azure Portal under Connection Security):
$ az mysql server update --resource-group ghost-blog-resource --name ghost-database --ssl-enforcement Disabled # or Enabled
Download the cert file; we will use it later when we create the secrets. You can also verify the SSL connection via the MySQL client by using the cert file.
$ mysql -h ghost-database.mysql.database.azure.com -u admin@ghost-database -p --ssl-ca=BaltimoreCyberTrustRoot.crt.pem
mysql> status
# output should show: `SSL: Cipher in use is AES256-SHA`
Creating Secrets to Store Credentials
The secrets will store the sensitive data that we'll need to pass on to our pods. As secret objects can store binary data as well, we need to base64 encode anything we store in them.
$ echo -n "transport" | base64
$ echo -n "service" | base64
$ echo -n "user" | base64
$ echo -n "pass" | base64
The -n option is needed, so echo doesn't add a \n at the end of the echoed string. Provide the base64 values for transport, service, user, pass:
# mail-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: mail-credentials
type: Opaque
data:
  transport: QSBsbGFtYS4gV2hhdCBlbHNl
  service: VGhlIFJveWFsIFBvc3QuIE5vbmUgZWxzZSB3b3VsZCBJIHRydXN0
  user: SXQncy1hIG1lISBNYXJpbw==
  pass: WW91IHNoYWxsIG5vdA==
Create another secret file and provide your credentials for MySQL.
# db-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  user: SXQncy1hIG1lISBNYXJpbw==
  host: QSB2ZXJ5IGZyaWVuZGx5IG9uZSwgSSBtaWdodCBhZGQ=
  pass: R2FuZGFsZiEgSXMgdGhhdCB5b3UgYWdhaW4/
  dbname: V2FuZGEsIGJ1dCBoZXIgZnJpZW5kcyBjYWxsIGhlciBFcmlj
Upload the secrets, so you can access them in your deployment.
$ kubectl create -f mail-secrets.yml -f db-secrets.yml
We need to create one more secret for the previously downloaded cert.
$ kubectl create secret generic ssl-cert --from-file=BaltimoreCyberTrustRoot.crt.pem
We will use these later in the deployment.
Creating the Deployment
Everything is set up, now we can create the deployment which will pull our app container and run it on Kubernetes.
# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        # envs to run ghost in production
        env:
        - name: mail__transport
          valueFrom:
            secretKeyRef:
              name: mail-credentials
              key: transport
        - name: mail__options__service
          valueFrom:
            secretKeyRef:
              name: mail-credentials
              key: service
        - name: mail__options__auth__user
          valueFrom:
            secretKeyRef:
              name: mail-credentials
              key: user
        - name: mail__options__auth__pass
          valueFrom:
            secretKeyRef:
              name: mail-credentials
              key: pass
        - name: mail__options__port
          value: "2525"
        - name: database__client
          value: mysql
        - name: database__connection__user
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: user
        - name: database__connection__password
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: pass
        - name: database__connection__host
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: host
        - name: database__connection__ssl__rejectunauthorized
          value: "true"
        - name: database__connection__ssl
          valueFrom:
            secretKeyRef:
              name: ssl-cert
              key: BaltimoreCyberTrustRoot.crt.pem
        - name: database__connection__database
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: dbname
        - name: url
          value: "http://your_url.com"
        - name: NODE_ENV
          value: production
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
          subPath: blog
        # resources ghost needs
        resources:
          requests:
            cpu: "130m"
            memory: "256Mi"
          limits:
            cpu: "140m"
            memory: "512Mi"
      volumes:
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim
Create the deployment with the following command:
$ kubectl apply -f deployment.yml
# you can run commands with the --watch flag, so you don't have to spam to see changes
$ kubectl get pod -w
# if any error occurs
$ kubectl describe pod
Creating a Service to Expose our Blog
We can expose our application to the internet with the following command:
$ kubectl expose deployment ghost-blog --type="LoadBalancer" \
    --name=ghost-blog-service --port=80 --target-port=2368
This will expose ghost deployment on port 80 as ghost-blog-service.
$ kubectl get service -w
# run get service with the --watch flag, so you will see when `ghost-blog-service` gets an `External-IP`
Creating a Service with Static IP
Now we want to point our DNS provider to our service, so we need a static IP.
# reserve a Static IP
$ az network public-ip create --resource-group MC_ghost-blog-resource_ghost-blog-cluster_eastus --name staticIPforGhost --allocation-method static
# get the reserved Static IP
$ az network public-ip list --resource-group MC_ghost-blog-resource_ghost-blog-cluster_eastus --query [0].ipAddress --output tsv
Now let's create the following service.yml file and replace loadBalancerIP with yours. With this, you can always expose your application on the same IP address.
# service.yml
apiVersion: v1
kind: Service
metadata:
  name: ghost-blog-service
  labels:
    app: ghost
spec:
  loadBalancerIP: 133.713.371.337 # your reserved IP
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 2368 # port where ghost runs
  selector:
    app: ghost-blog # must match the label of the pods created by the deployment
It does the same as the kubectl expose command, but we have a reserved static IP.
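Apply it the same way as the other manifests, then wait for the External-IP to show up:

$ kubectl apply -f service.yml
$ kubectl get service -w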
Final Thoughts on Deploying on Azure Kubernetes Service (AKS)
As you can see, even though Kubernetes abstracts away cloud providers and gives you a unified interface when you interact with your application, you still need to do quite a lot of vendor-specific setup. Thus, if you are about to move to the cloud, I highly suggest playing around with different providers so you can find the one that suits you best. Some might be easier to set up for one use case, while another might be cheaper.
This article was written by Kristof Ivancza who is a software engineer at RisingStack & Tamas Kadlecsik, RisingStack's CEO. In case you need guidance with Kubernetes or Node.js, feel free to ping us at [email protected]
Running a blog, or something similar, on several of the major platforms can help you figure out which one to use for what, and the experimentation also gives you an idea of the actual costs you'll pay in the long run. I know most of them have price calculators, but when it comes to running a whole cluster, you'll face quite a lot of charges that you did not anticipate, or at least did not expect to be that high.
Text
Deploying a Stateful Application on Google Cloud Kubernetes Engine
In this article, we'll guide you through the process of deploying a stateful, Dockerized Node.js app on Google Cloud Kubernetes Engine! As an example application, we will use Ghost - the open-source blogging platform we use to run the RisingStack blog and serve ~150K readers/month. The application will have persistent storage so it can persist its themes and images.
Takeaway: Once you go through this tutorial you'll be able to follow the processes & ideas outlined here to deploy any stateful application on Kubernetes!
If you are not familiar with Kubernetes on Google Cloud Kubernetes Engine or with setting up clusters, I recommend reading our How to Get Started With Kubernetes article first. It will give you the basics to get started.
This article was written by Kristof Ivancza who is a software engineer at RisingStack & Tamas Kadlecsik, RisingStack's CEO. In case you need guidance with Kubernetes or Node.js, feel free to ping us at [email protected]
What is Ghost?
Ghost is an open-source blogging platform powered by a non-profit organization called the Ghost Foundation, and it's maintained by independent contributors. Ghost is written in Node.js on the server side, with Ember.js & Handlebars on the client side. Check out their GitHub repository for more information.
What will we need to deploy a stateful app on Kubernetes properly?
Create a cluster
Persistent Disks to store our images and themes
Create a Second Generation MySQL instance and connect to it
A secret to store credentials
A deployment
A service to expose the application
Cluster creation
First, we need to create a cluster and set the default cluster for gcloud and pass cluster credentials to kubectl.
# create the cluster
$ gcloud container clusters create [CLUSTER_NAME]
# set the default cluster
$ gcloud config set container/cluster [CLUSTER_NAME]
# pass cluster credentials to kubectl
$ gcloud container clusters get-credentials [CLUSTER_NAME]
Get the Project ID and assign it to a variable named PROJECT_ID.
$ export PROJECT_ID="$(gcloud config get-value project -q)"
Getting started with the container
Here you can find the official Dockerfile for ghost and docker-entrypoint.sh script. To test it locally, you can run:
$ docker run --name test-ghost -p 8080:2368 ghost:alpine
Now you should be able to reach your local Ghost by opening http://localhost:8080 in your browser.
If we want to deploy this container on Kubernetes, we'll need to create a deployment.
# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
We're not production ready yet, so we'll keep updating the deployment as we go!
As a second step, let's create and mount the disks we'll use to store our Ghost themes and blogpost images.
Creating persistent storages to store our themes and images
Kubernetes pods are stateless by default, meaning that it should be possible to kill and spin up new pods for a deployment at a moment's notice. As a result, each pod's file system is ephemeral, so whatever files were modified or created during the pod's lifetime will be gone once the pod is shut down.
However, Ghost stores the themes and images we upload in /var/lib/ghost/content/, thus we have to make sure they are persisted properly. To do so, we need to use a persistent storage and make our application stateful.
We have two ways of creating disks. We can create one manually on GCE and pass it on to Kubernetes, or just tell Kubernetes what we need and let it create the disk for us. The first method is called Static Provisioning and the second one is called - you guessed it - Dynamic Provisioning.
Static Provisioning is useful when you have an already existing disk from before, and you want your pods to use this disk. But if you don't have a disk yet, it's easier to let Kubernetes create one for you, which means using Dynamic Provisioning.
Side note: it is also easier on our wallet to go with Dynamic Provisioning as on GCE the smallest disk we can create is a 100GB volume, but when we let Kubernetes provision the disk for us, we can request whatever size we need.
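For completeness, here is roughly what Static Provisioning would look like if you had already created a 100GB disk on GCE by hand. This is a hedged sketch that we don't actually use in this tutorial - the disk name is a placeholder:

# PersistentVolume.yml (static provisioning sketch, not used below)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-blog-volume
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: ghost-content-disk   # the pre-existing GCE disk, created manually
    fsType: ext4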
To understand the disk creation we need to take a look at Persistent Volume Claims, so let's get to it straight away!
Persistent Volume Claim
Let's update our deployment first, so it will wait for a mountable disk to be present.
# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
      volumes:
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim
What changed is that we added the volumeMounts and volumes fields.
The volumeMounts field belongs to the container. The mountPath defines where the volume will be mounted in the container. So it's basically the same as if we ran our container with docker run -v $(pwd):/var/lib/ghost/content/ --name ghost-blog -p 8080:2368 ghost:alpine.
The volumes defines the PersistentVolumeClaim or pvc that will handle the attachment of the volume to the container. In our case it will look like this:
# PersistentVolumeClaim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-blog-volume-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
As you can see, the name matches the one we referred to in the deployment. In the spec we set accessModes to ReadWriteMany, so the volume can be accessed by multiple Nodes and we can achieve high availability. The part where we request 10Gi of storage speaks for itself IMO, and for all intents and purposes it's enough to know that the storageClassName: standard field lets Kubernetes automatically provision an HDD for us.
To submit the pvc run the following command:
$ kubectl apply -f PersistentVolumeClaim.yml
# to make sure everything happened correctly
$ kubectl get pvc
# if something is not bound, or you need more information for debugging
$ kubectl describe pvc
If everything went right, you should see after running $ kubectl get pvc that the persistent volume is created and bound to the volume claim.
Create and connect to MySQL using a Second Generation instance
We need to create a "Cloud SQL for MySQL Second Generation" instance.
By using a Second Generation instance, we can use a Cloud SQL Proxy sidecar in our deployment to communicate with the database. A sidecar is a second, helper container inside a deployment next to the application container that handles auxiliary tasks, such as encryption. (This also might shed some light on why the containers field is plural in the deployment.ymls and why it's an array.)
Setting up the instance and the sidecar will be a bit tricky, but at least this way we don't have to configure SSL connection, whitelist IP addresses or create a Static IP to connect to our CloudSQL instance, as the proxy handles all communication with the database.
Creating a Second Generation instance:
First we get machine types
$ gcloud sql tiers list
TIER              AVAILABLE_REGIONS            RAM        DISK
D0                [long-ass region list]       128 MiB    250 GiB
D1                [long-ass region list]       512 MiB    250 GiB
D2                [long-ass region list]       1 GiB      250 GiB
[...]
db-f1-micro       [even longer region list]    614.4 MiB  3.0 TiB
db-g1-small       [even longer region list]    1.7 GiB    3.0 TiB
db-n1-standard-1  [even longer region list]    3.8 GiB    10.0 TiB
[...]
# to use a 2nd gen instance, you must choose from values that are starting with `db-`
Then we create the instance
$ gcloud sql instances create [INSTANCE_NAME] --tier=[TIER] --region=[REGION]
# [INSTANCE_NAME] = this will be the name of the db
# [TIER] = chosen machine tier from the previous list
# [REGION] = preferably your cluster's region (e.g: us-central1)
Finally, we set root for MySQL
$ gcloud sql users set-password root % --instance [INSTANCE_NAME] --password [PASSWORD]
# [INSTANCE_NAME] = name of your previously created db
# [PASSWORD] = the password you want for root
Connect to CloudSQL using a Proxy sidecar
First, we need to enable the Cloud SQL Admin API. You can do it here
Create a Service Account
Go to the Service Account Page
Select the needed Cloud SQL instance
Click Create Service Account
Select Cloud SQL > Cloud SQL Client from the role dropdown menu
Change the account ID to a value you will remember later, if needed
Click Furnish a new Private Key
Click create
A JSON file with the private key will be downloaded to your machine. Keep it somewhere safe, as you will need it later. I will refer to this file later as [PATH_TO_DOWNLOADED_JSON_SECRET]
Create the proxy user: a MySQL user that the proxy sidecar will use when connecting to the database. To do so, use the following command:
$ gcloud sql users create proxyuser cloudsqlproxy~% --instance=[INSTANCE_NAME] --password=[PASSWORD]
# [INSTANCE_NAME] = MySQL instance you want to connect to (e.g: ghost-sql)
# The username of the proxy user will be "proxyuser", with the password you pass as an argument to the command
Get your instance connection name
$ gcloud sql instances describe [INSTANCE_NAME]
$ gcloud sql instances describe ghost-sql | grep 'connectionName'
connectionName: ghost-blog:us-central1:ghost-sql
Create the secrets that weâll use in the deployment:
Two secrets are required to access data in Cloud SQL from your application: the cloudsql-instance-credentials Secret contains the service account (the JSON file you got in step 2.7), and the cloudsql-db-credentials Secret contains the proxy's user account and password.
To create cloudsql-instance-credentials run:
$ kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=[PATH_TO_DOWNLOADED_JSON_SECRET]
# [PATH_TO_DOWNLOADED_JSON_SECRET] = JSON file you downloaded when you created the service account
To create cloudsql-db-credentials run:
$ kubectl create secret generic cloudsql-db-credentials --from-literal=username=proxyuser --from-literal=password=[PASSWORD]
# username=proxyuser - the username we created for CloudSQL in the 3rd step
# password=[PASSWORD] - the password for proxyuser we set in the 3rd step
Add the proxy container to the deployment:
Replace [INSTANCE_CONNECTION_NAME] with the value you got in the 4th step.
# deployment.yml
[...]
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
      # cloudsql proxy container
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=[INSTANCE_CONNECTION_NAME]=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim
Pass the Cloud SQL credentials to the ghost container.
# deployment.yml
[...]
spec:
  template:
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
        # Env vars to be passed to the container
        env:
        - name: database__connection__host
          value: "127.0.0.1"
        - name: database__connection__user
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: database__connection__password
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
      # cloudsql proxy container
      - name: cloudsql-proxy
        [...]
      volumes:
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim
      # db credentials stored in this volume to access our mysql
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
database__connection__host is 127.0.0.1 as containers in the same pod can access each other on localhost.
The secret named cloudsql-db-credentials stores the created username & password for the proxy.
We also added a new volume to volumes at the bottom of the yml. As you can see, it is not an actual disk, but the secret we created before. This is the secret that stores the data from the JSON file we got when we created the service account in step 2.7.
Set up the mail server connection
In our example, we will use Sendgrid to send emails. As before, we'll create a secret to pass on the values to the deployment.
In the previous section we used the following command to create a secret:
$ kubectl create secret generic cloudsql-db-credentials --from-literal=username=proxyuser --from-literal=password=[PASSWORD]
We can do the same here as well:
$ kubectl create secret generic mail-secrets --from-literal=mailuser=[SENDGRID_USERNAME] --from-literal=mailpass=[SENDGRID_PASSWORD]
If you run kubectl get secret mail-secrets -o yaml you'll get
$ kubectl get secret mail-secrets -o yaml
apiVersion: v1
data:
  mailpass: V2hhdCB3ZXJlIHlvdSBob3BpbmcgeW91J2QgZmluZCBoZXJlPyA7KQo=
  mailuser: WW91J3JlIGEgdGVuYWNpb3VzIGxpdGxlIGZlbGxhLCBhcmVuJ3QgeW91PyA6KQo=
kind: Secret
metadata:
  creationTimestamp: 2018-03-13T15:48:39Z
  name: sendgrid-secrets
  namespace: default
  resourceVersion: "2517884"
  selfLink: /api/v1/namespaces/default/secrets/sendgrid-secrets
  uid: ffec2546-26d5-11e8-adfc-42010a800106
type: Opaque
As you can see the main information is in data. The values we passed to the command are base64 encoded and stored there.
If you prefer to create a yaml file for the secret as well, you can strip this one from the auto generated metadata, so it looks something like this:
apiVersion: v1
data:
  mailpass: V2hhdCB3ZXJlIHlvdSBob3BpbmcgeW91J2QgZmluZCBoZXJlPyA7KQo=
  mailuser: WW91J3JlIGEgdGVuYWNpb3VzIGxpdGxlIGZlbGxhLCBhcmVuJ3QgeW91PyA6KQo=
kind: Secret
metadata:
  name: mail-secrets # you still need at least a name in the metadata
type: Opaque
and upload it with
$ kubectl create -f mail-secrets.yml
Now we also need to pass these as env vars to the app container:
[...]
spec:
  template:
    spec:
      containers:
      # ghost container
      - name: ghost-container
        [...]
        env:
        - name: mail__transport
          value: SMTP
        - name: mail__options__service
          value: Sendgrid
        # use mail envvars from the mail-secrets file
        - name: mail__options__auth__user
          valueFrom:
            secretKeyRef:
              name: mail-secrets
              key: mailuser
        - name: mail__options__auth__pass
          valueFrom:
            secretKeyRef:
              name: mail-secrets
              key: mailpass
        # end of mailenvs
        - name: mail__options__port
          value: "2525"
        - name: database__client
          value: mysql
        # CloudSQL credentials to connect with the Proxyuser
        - name: database__connection__host
          value: "127.0.0.1"
        - name: database__connection__user
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: database__connection__password
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
      # cloudsql proxy container
      - name: cloudsql-proxy
        [...]
Creating the deployment
By now we have all the objects our deployment needs to run. There is still some additional setup left, but let's see the whole package:
# deployment.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ghost-blog
  labels:
    app: ghost-blog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost-blog
  template:
    metadata:
      labels:
        app: ghost-blog
    spec:
      containers:
      # ghost container
      - name: ghost-container
        image: ghost:alpine
        # envs to run ghost in production
        env:
        - name: mail__transport
          value: SMTP
        - name: mail__options__service
          value: Sendgrid
        # use mail envvars from the mail-secrets file
        - name: mail__options__auth__user
          valueFrom:
            secretKeyRef:
              name: mail-secrets
              key: mailuser
        - name: mail__options__auth__pass
          valueFrom:
            secretKeyRef:
              name: mail-secrets
              key: mailpass
        # end of mailenvs
        - name: mail__options__port
          value: "2525"
        - name: database__client
          value: mysql
        # CloudSQL credentials to connect with the Proxyuser
        - name: database__connection__user
          # referencing the secret file
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        - name: database__connection__password
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        # end of Cloud SQL secrets
        - name: database__connection__host
          value: "127.0.0.1"
        # also recommended to put the database name inside a secret file
        - name: database__connection__database
          value: database_name
        - name: url
          value: "http://your_url.com"
        - name: NODE_ENV
          value: production
        # end of envs
        imagePullPolicy: IfNotPresent
        # ghost always starts on this port
        ports:
        - containerPort: 2368
        volumeMounts:
        # define persistent storage for themes and images
        - mountPath: /var/lib/ghost/content/
          name: pd-blog-volume
          subPath: blog
        # resources ghost needs
        resources:
          requests:
            cpu: "130m"
            memory: "256Mi"
          limits:
            cpu: "140m"
            memory: "512Mi"
      # cloudsql proxy container
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=[INSTANCE_NAME]=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        # resources cloudsql needs
        resources:
          requests:
            cpu: "15m"
            memory: "64Mi"
          limits:
            cpu: "20m"
            memory: "128Mi"
      volumes:
      # db credentials stored in this volume to access our mysql
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: cloudsql
        emptyDir: {}
      # persistent storage used to store our themes and images
      # please note that we are using the predefined volume claim
      - name: pd-blog-volume
        persistentVolumeClaim:
          claimName: pd-blog-volume-claim
There are still some fields that might need some explanation.
In the root, you can see replicas: 1. This tells Kubernetes that we want exactly one pod to be spawned by the deployment. If you want to achieve high availability, you should set this value to at least 3. You could also set up pod autoscaling if you want to make sure that your pods are scaled up horizontally when the load is higher and scaled back after the peak is over.
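Pod autoscaling is driven by a HorizontalPodAutoscaler object. We don't set one up in this tutorial, but a minimal sketch for this deployment could look like the following - the replica bounds and target CPU percentage are assumptions, and it requires cluster metrics (Heapster/metrics-server) to be available:

# hpa.yml (sketch, not part of the tutorial)
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ghost-blog-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: ghost-blog
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80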
You can also find selector and label fields in three different places. The first one, in metadata.labels, is the label of the deployment. So when you run kubectl get deployment -l app=ghost-blog you'll get all the deployments that have this label present. In selector.matchLabels we define which pods the deployment should handle. This also means that you could manually create pods and the deployment would handle them.
But as you saw, we didn't create pods manually. We used the spec.template field instead, which creates a pod template that the deployment will use when it spawns new pods. That is why you see the strange path before the container specification: spec.template.spec.containers. The first spec is the specification of the deployment, which has a pod template, and this pod template spawns pods based on its own spec. That's also the reason why we have another set of labels in the template. These are the labels the created pods will have, and this way the deployment can match the pods it needs to handle once they are spawned.
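Label selectors are also what you use to query objects from the command line; for example, with the labels above:

# all deployments carrying the label
$ kubectl get deployment -l app=ghost-blog
# all pods spawned from its template
$ kubectl get pods -l app=ghost-blog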
We also added the resources field with CPU and memory requests and limits. If you omit this, the first created pod will eat up all the resources of its host node and all other pods will be stuck in pending status. One thing to note though is that there is quite a small difference between the CPU request and limit. The reason for this is to be ready for autoscaling. If there is a big difference between the two, it might happen that your node gets filled with a lot of pods which use just a small amount of CPU. When the need comes to scale them vertically, though, there is no processor time left, so you are stuck with pods that cannot serve their purpose fast enough and cannot be scaled up. To prevent this, keep a small difference between the requested CPU and its limit.
It is also worth mentioning that Google Compute Engine blocks outbound connections on ports 25, 465, and 587. All the popular third-party mail providers such as Mailgun, MailJet or SendGrid use one of these ports by default in Ghost. That's why we have overridden the default mail port to 2525 with the mail__options__port env var.
Now we are ready to apply the deployment:
$ kubectl apply -f deployment.yml
# get pods with the watch flag, so the output is constantly updated when changes happen
$ kubectl get pods -w
# to get further info when a problem has occurred
$ kubectl describe pods
With the following command, you can also run a particular image and create a deployment, which can come handy while you are testing if your setup is correct. (Note that this is the way you manually start a pod without a deployment.)
$ kubectl run ghost-blog --replicas=1 --image=ghost:alpine --port=80
And here are some more handy kubectl commands you can use while debugging:
# copy from your computer to the pod - use for testing only!
$ kubectl cp SOURCE default/_podname_:/DESTINATION -c container_name
# view logs
$ kubectl logs _podname_
# if multiple containers are in the pod
$ kubectl logs _podname_ --container container_name
# get a shell to a running container
$ kubectl exec -it _podname_ -- sh
Creating a service to expose our application
All that's left is to expose our application, so it can receive external traffic.
You can let Kubernetes get a static IP for you to expose your blog to the public internet, but then you have to reconfigure your DNS provider each time you recreate the service. It is better to provision one manually first and then pass it on to the service.
# create a Static IP address named ghost-blog-static-ip
$ gcloud compute addresses create ghost-blog-static-ip --region us-central1
# get the Static IP created with the previous command
$ gcloud compute addresses describe ghost-blog-static-ip --region us-central1 | grep 'address'
And now create the following service.yml file and replace loadBalancerIP with yours.
# service.yml
apiVersion: v1
kind: Service
metadata:
  name: blog-ghost-service
  labels:
    app: blog-ghost
spec:
  selector:
    app: ghost-blog # must match the label of the pods created by the deployment
  ports:
  - port: 80
    targetPort: 2368 # exposed port of the ghost container
  type: LoadBalancer
  loadBalancerIP: [IP_ADDRESS]
This creates a service named blog-ghost-service; it finds any pod that carries the label app: ghost-blog and exposes its container port 2368 on port 80 to the public internet, while balancing the load between the pods.
$ kubectl apply -f service.yml
# watch the get service command
$ kubectl get service -w
# usually it takes about a minute to get the External IP
# if it's still stuck in <pending> status, run the following
$ kubectl describe service
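Once the service is up, you can also double-check which pods actually ended up behind it by listing its endpoints:

$ kubectl get endpoints blog-ghost-service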
If you prefer one liners, you can achieve the same result by running the following command:
$ kubectl expose deployment ghost-blog --type="LoadBalancer" \
    --name=ghost-blog-service --port=80 --target-port=2368
This will expose your previously created ghost-blog deployment on port 80 with the service name ghost-blog-service without the need to create the yaml file yourself.
Final thoughts on deploying to Kubernetes
I know, this whole thing might look daunting, especially if you have already deployed stateless apps to Kubernetes. However, if you take into account the fact that when you deploy a Ghost blog, or any other stateful application, to simple VMs without containers or container orchestration, you would need to go through the same steps, but manually. You need to create disks and attach them by hand, create a database instance and set up the connection. And you also need to store your credentials safely and set up your firewall rules. The majority of complexity here comes from the fact that managing stateful apps is complex in its own right. Kubernetes makes it easier by handling the creation and attachment of disks to our service instances, and it helps to keep things organized when the app needs to be horizontally scaled.
This article was written by Kristof Ivancza who is a software engineer at RisingStack & Tamas Kadlecsik, RisingStack's CEO. In case you need guidance with Kubernetes or Node.js, feel free to ping us at [email protected]
The only part that is a bit more tedious than it would be otherwise is the Cloud SQL Proxy we needed to set up, but this was necessary because of Google Cloud, not Kubernetes. Add to that the fact that by leveraging container technologies, we get a ready-made proxy we can use, which takes away a lot of manual setup we'd otherwise need to handle.
Now that we have deployed one stateful app, we are ready to package all our blogs in a similar way and set them up in a similar cluster, or even in the same one, if we want to reduce our costs. This way we get a unified environment that we can interact with for each of our assets if needed. Even though Kubernetes is mostly used for distributed applications, we've now shown that it can also be used to deploy several standalone apps more easily than would otherwise be possible.
Happy infrastructuring!
Text
Connecting to Mongo with a self signed CA on a JVM in Kubernetes
At $WORK, we're creating an internal platform on top of Kubernetes for developers to deploy their apps. Our Ops people have graciously provided us with Mongo clusters that all use certificates signed by a self-signed certificate authority. So, all our clients need to know about the self-signed CA in order to connect to Mongo. For Node or Python, it's possible to pass the self-signed CA file in the code running in the application.
But, things are a little more complicated for Java or Scala apps, because configuration of certificate authorities is done at the JVM level, not at the code level. And for an extra level of fun, we want to do it in Kubernetes, transparently to our developers, so they don't have to worry about it on their own.
err, wha? telling the JVM about our CA
First off, we had to figure out how to tell the JVM to use our CA. And luckily since all the JVM languages use the same JVM, it's the same steps for Scala, or Clojure, or whatever other JVM language you prefer. The native MongoDB Java driver docs tell us exactly what we need to do: use keytool to import the cert into a keystore that the JVM wants, and then use system properties to tell the JVM to use that keystore. The keytool command in the docs is:
$ keytool -importcert -trustcacerts -file <path to certificate authority file> \
    -keystore <path to trust store> -storepass <password>
The path to the existing keystore that the JVM uses by default is $JAVA_HOME/jre/lib/security/cacerts, and its default password is changeit. So if you wanted to add your self signed CA to the existing keystore, it'd be something like
$ keytool -importcert -trustcacerts -file ssca.cer \
    -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit
(Even this very first step had complications. Our self signed CA was a Version 1 cert with v3 extensions, and while no other language cared, keytool refused to create a keystore with it. We ended up having to create a new self-signed CA with the appropriate version. Some lucky googling led us to that conclusion, but of particular use was using openssl to examine the CA and check its versions and extensions:)
$ openssl x509 -in ssca.cer -text -noout
// Certificate:
//     Data:
//         Version: 3 (0x2)
//         Serial Number: ...
//     ...
//     X509v3 extensions:
//         X509v3 Subject Key Identifier: ...
//         X509v3 Key Usage: ...
//         X509v3 Basic Constraints: ...
Another useful command was examining the keystore before and after we imported our self signed CA:
$ keytool -list -keystore /path/to/keystore/file
as you can look for your self-signed CA in there to see if you ran the command correctly.
Anyway, once you've created a keystore for the JVM, the next step is to set the appropriate system properties, again as outlined in the docs:
$ java \
    -Djavax.net.ssl.trustStore=/path/to/cacerts \
    -Djavax.net.ssl.trustStorePassword=changeit \
    -jar whatever.jar
Since the default password is changeit, you may want to change it... but if you don't change it, you wouldn't have to specify the trustStorePassword system property.
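If you do decide to change it, keytool can do that too; a quick sketch (the new password is obviously a placeholder):

$ keytool -storepasswd -keystore /path/to/cacerts -storepass changeit -new <new-password>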
handling this in kubernetes
The above steps aren't too complicated on their own. We just need to make sure we add our CA to the existing ones, and point the JVM towards our new and improved file. But, since we'll eventually need to rotate the self-signed CA, we can't just run keytool once and copy it everywhere. So, an initContainer it is! keytool is a java utility, and it's handily available on the openjdk:8u121-alpine image, which means we can make a initContainer that runs keytool for us dynamically, as part of our Deployment.
Since seeing the entire manifest at once doesn't necessarily make it easy to see what's going on, I'm going to show the key bits piece by piece. All of the following chunks of yaml belong to in the spec.template.spec object of a Deployment or Statefulset.
spec:
  template:
    spec:
      volumes:
        - name: truststore
          emptyDir: {}
        - name: self-signed-ca
          secret:
            secretName: self-signed-ca
So, first things first, volumes: an empty volume called truststore which we'll put our new and improved keystore-with-our-ssca. Also, we'll need a volume for the self-signed CA itself. Our Ops provided it for us in a secret with a key ca.crt, but you can get it into your containers any way you want.
$ kubectl get secret self-signed-ca -o yaml --export
apiVersion: v1
data:
  ca.crt: ...
kind: Secret
metadata:
  name: self-signed-ca
type: Opaque
With the volumes in place, we need to set up init containers to do our keytool work. I assume (not actually sure) that we need to add our self-signed CA to the existing CAs, so we use one initContainer to copy the existing default cacerts file into our truststore volume, and another initContainer to run the keytool command. It's totally fine to combine these into one container, but I didn't feel like making a custom docker image with a shell script or having a super long command line. So:
spec:
  template:
    spec:
      initContainers:
        - name: copy
          image: openjdk:8u121-alpine
          command: [ cp, /usr/lib/jvm/java-1.8-openjdk/jre/lib/security/cacerts, /ssca/truststore/cacerts ]
          volumeMounts:
            - name: truststore
              mountPath: /ssca/truststore
        - name: import
          image: openjdk:8u121-alpine
          command: [ keytool, -importcert, -v, -noprompt, -trustcacerts, -file, /ssca/ca/ca.crt, -keystore, /ssca/truststore/cacerts, -storepass, changeit ]
          volumeMounts:
            - name: truststore
              mountPath: /ssca/truststore
            - name: self-signed-ca
              mountPath: /ssca/ca
Mount the truststore volume in the copy initContainer, grab the cacerts file, and put it in our truststore volume. Note that while we'd like to use $JAVA_HOME in the copy initContainer, I couldn't figure out how to use environment variables in the command. Also, since we're using a tagged docker image, there is a pretty good guarantee that the filepath shouldn't change underneath us, even though it's hardcoded.
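One way around the hardcoded path - untested here, so take it as a sketch - is to wrap the copy in a shell, so the variable gets expanded by sh inside the container rather than by Kubernetes:

# alternative copy initContainer using $JAVA_HOME, expanded by sh at runtime
- name: copy
  image: openjdk:8u121-alpine
  command: [ sh, -c, "cp $JAVA_HOME/jre/lib/security/cacerts /ssca/truststore/cacerts" ]
  volumeMounts:
    - name: truststore
      mountPath: /ssca/truststore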
Next, the import step! We need to mount the self-signed CA into this container as well. Run the keytool command as described above, referencing our copied cacerts file in our truststore volume and passing in our ssCA.
Two things to note here: the -noprompt argument to keytool is mandatory, or else keytool will prompt for interaction, but of course the initContainer isn't running in a shell for someone to hit yes in. Also, the mountPaths for these volumes should be separate folders! I know Kubernetes is happy to overwrite existing directories when a volume mountPath clashes with a directory on the image, and since we have different data in our volumes, they can't be in the same directory. (...probably, I didn't actually check)
The final step is telling the JVM where our new and improved trust store is. My first idea was just to add args to the manifest and set the system property in there, but if the Dockerfile ENTRYPOINT is something like
java -jar whatever.jar
then we'd get a command like
java -jar whatever.jar -Djavax.net.ssl.trustStore=...
which would pass the option to the jar instead of setting a system property. Plus, that wouldn't work at all if the ENTRYPOINT was a shell script or something that wasn't expecting arguments.
After some searching, StackOverflow taught us about the JAVA_OPTS and JAVA_TOOL_OPTIONS environment variables. We can append our trustStore to the existing value of these env vars, and we'd be good to go!
spec:
  template:
    spec:
      containers:
        - image: your-app-image
          env:
            # make sure not to overwrite this when composing the yaml
            - name: JAVA_OPTS
              value: -Djavax.net.ssl.trustStore=/ssca/truststore/cacerts
          volumeMounts:
            - name: truststore
              mountPath: /ssca/truststore
In our app that we use to construct the manifests, we check if the developer is already trying to set JAVA_OPTS to something, and make sure that we append to the existing value instead of overwriting it.
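The end result in the manifest is simply a concatenated value; for example, if a developer had already set a heap size (the -Xmx value here is a made-up example):

env:
  - name: JAVA_OPTS
    # developer-supplied options first, our truststore setting appended
    value: "-Xmx512m -Djavax.net.ssl.trustStore=/ssca/truststore/cacerts"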
a conclusion of sorts
Uh, so that got kind of long, but the overall idea is more or less straightforward. Add our self-signed CA to the existing cacerts file, and tell the JVM to use it as the truststore. (Note that it's the trustStore option you want, not the keyStore!). The entire Deployment manifest all together is also available, if that sounds useful...
Text
Kubernetes Tutorial: Using Secrets in Your Application
http://i.securitythinkingcap.com/PzYCvH #DevOps