#via:IFTTT
cloudnative · 7 years ago
Link
“In practice and actual fact, what really matters for older Kubernetes version support is the continued availability and exercising of its end-to-end testing pipeline. If the machinery to quickly update an old release continues to exist, and exist in a state of good (non-flakey) repair, cutting a patch release is just a matter of someone – you, your provider or your vendor – having the engineering gumption to push it through. If a critical security fix isn’t back-ported to an older Kubernetes version, that’s a strong sign that no reasonably professional team is using that version in production anymore.”
bsbllbsbll · 10 years ago
Link
An American weapon kills tons of foreign cities?
Yup, that's how you achieve international peace, Ozy.
cloudnative · 7 years ago
Link
‘Importantly, the OpenShift platform cloud software, which included Red Hat’s own implementation of the Kubernetes container controller, will be deployable on either the full-on Red Hat Enterprise Linux in pets mode or the minimalist Red Hat CoreOS in cattle mode. But it will be using the Tectonic version of the Kubernetes controller going forward as well as integrating the Prometheus monitoring tool and etcd for storing telemetry. Gracely tells The Next Platform that the implementation of Kubernetes had outside dependencies such as the CloudForms hybrid cloud management tool (formerly ManageIQ) and was not “native” to Kubernetes in the same way that Tectonic is, meaning free of outside dependencies.’
cloudnative · 7 years ago
Link
"It’s a waiting game for a comprehensive management platform."
cloudnative · 7 years ago
Link
It happens to be the case that CF — because it’s an app platform and wants to let the user focus on their code — provides a way to convert code into containers inside the platform without having to start messing around with Dockerfiles and the like. And this functionality even does some cool things for you like keeping your container OS automatically patched, so you don’t have to build CI pipelines to monitor your base images and rebuild stuff.
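For a sense of what that looks like from the developer’s side, here is a minimal sketch, assuming a Python web app: the deployable artifact is just the code plus a dependency manifest (say, a requirements.txt listing flask), and the platform’s buildpack turns it into a container and keeps the base image patched, so there is no Dockerfile in the repo. The file and route names below are illustrative, not from the linked post.

```python
# app.py - minimal sketch of an app handed to a buildpack-based platform.
# Assumptions: a Python web app with a requirements.txt (e.g. "flask")
# next to it; the buildpack detects the manifest, builds the container,
# and the platform keeps the underlying OS image patched. No Dockerfile.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a buildpack-built container\n"

if __name__ == "__main__":
    # Platforms like CF typically inject the port to bind to via $PORT.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Deploying is then a single `cf push` from the project directory; rebuilding when the base image gets patched is the platform’s job, not a CI pipeline’s.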
cloudnative · 8 years ago
Link
cloudnative · 6 years ago
Link
There’s a cost to all this choice: it delays decision-making, causes distress, and leads to post-decision regret.
cloudnative · 6 years ago
Link
> Thus we've seen a bunch of Kubernetes services spring up, run by the same people who brought you all the other Infrastructures-as-a-Service. Google (from whence Kubernetes emerged originally) has Google Kubernetes Engine (GKE), Amazon has Elastic Container Service for Kubernetes (EKS), Microsoft has Azure Kubernetes Service (AKS), VMware has VMware Kubernetes Engine (VKE), you get the idea. Pivotal has Pivotal Container Service (PKS) that can run on AWS, Google Cloud Platform, VMware, and (as of its recent PKS 1.3 launch) also Azure.
cloudnative · 6 years ago
Link
Checklists of things to verify before deploying.
cloudnative · 6 years ago
Link
The list of pains:
- One cluster is not enough
- Developers want clusters close to them, for low latency
- Data storage
- Day-two operations: upgrading, scaling, capacity management
- Managing heterogeneous infrastructure underneath the platform
- Backup & restore, disaster recovery
cloudnative · 6 years ago
Link
Microservices for .NET.
cloudnative · 6 years ago
Link
> As a scheduler of containers, Kubernetes does a pretty good job. If you keep it focused on that key task, it can take you miles. As a manager of a large scale distributed infrastructure, it’s not so good.
cloudnative · 6 years ago
Link
It means either not having to worry about managing your middleware stack, or getting a trigger-driven event system, or both.
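To make the second half concrete, a hedged sketch of what a trigger-driven function tends to look like from the developer’s side; the handler signature and event fields below are hypothetical, since every FaaS platform defines its own:

```python
# Hedged sketch of a trigger-driven function. The platform (not shown)
# wires an event source - an HTTP request, a queue message, an object
# upload - to this handler and owns the middleware stack underneath it.
# The "event"/"context" shapes are hypothetical, not any specific vendor's.
import json


def handle(event: dict, context: dict) -> dict:
    source = event.get("source", "unknown")
    payload = event.get("data", {})
    print(f"triggered by {source}: {json.dumps(payload)}")
    return {"status": "processed"}
```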
cloudnative · 6 years ago
Link
Multi-tenancy ain’t easy:
> The Kubernetes cluster itself becomes the line of “Hard Tenancy”. This leads to the emerging pattern of “many clusters” rather than “one big shared” cluster. It’s not uncommon to see customers of Google’s GKE Service have dozens of Kubernetes clusters deployed for multiple teams. Often each developer gets their own cluster. This kind of behavior leads to a shocking amount of Kubesprawl.
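As a rough illustration of what living with “many clusters” means day to day, here is a sketch using the official kubernetes Python client to walk one kubeconfig context per team cluster; the context names are made up:

```python
# Sketch of surveying kubesprawl: one kubeconfig context per team cluster.
# Uses the official "kubernetes" Python client; context names are made up.
from kubernetes import client, config

contexts, _active = config.list_kube_config_contexts()

for ctx in contexts:
    name = ctx["name"]  # e.g. "team-payments", "team-search"
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=name))
    ns_count = len(api.list_namespace().items)
    print(f"{name}: {ns_count} namespaces")
```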
cloudnative · 6 years ago
Link
cloudnative · 6 years ago
Link