#kubernetes controller golang
Unbridle the Power of Golang Cloud Development With its Awesomeness
In the ever-evolving landscape of cloud development, choosing the right programming language is paramount. Go has earned its reputation here: it is a statically typed language designed to be quick, easy, and efficient. In this blog article we'll look at Golang's growing popularity in cloud development and see how it can revolutionise the process of building scalable, reliable, and high-performing cloud apps.
Efficient and Simple
Go was developed at Google with an emphasis on efficiency and simplicity. Its clear, minimal syntax lets developers express ideas succinctly, which speeds up development and simplifies code maintenance.
Go's efficiency shines in the fast-paced realm of cloud development, where agility and rapid iteration are essential. Instead of wrestling with intricate syntax, Golang developers can spend their time building features and fixing issues.
Scalability and Concurrency
Go's strong concurrency support is one of its most notable characteristics. Goroutines, lightweight threads managed by the Go runtime, make writing concurrent programs simple. This is especially helpful for cloud applications, where managing several activities at once is essential.
Go's concurrency paradigm and integrated channel support let programmers design highly concurrent and scalable applications. Cloud apps, which frequently need to use resources efficiently, can take advantage of how easily Go handles several concurrent tasks.
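As a minimal sketch of that model (the workload here is invented for illustration), ten tasks fan out to goroutines and report back over a channel:

package main

import (
	"fmt"
	"time"
)

// handleRequest stands in for any unit of cloud work: an API call,
// a queue message, a storage operation.
func handleRequest(id int, results chan<- string) {
	time.Sleep(10 * time.Millisecond) // simulate I/O
	results <- fmt.Sprintf("request %d done", id)
}

func main() {
	results := make(chan string)
	for i := 1; i <= 10; i++ {
		go handleRequest(i, results) // each request runs concurrently
	}
	for i := 0; i < 10; i++ {
		fmt.Println(<-results) // collect all ten results
	}
}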
Performance That Packs a Punch
Go performs admirably here. Because it is compiled to native machine code, it produces fast-executing binaries, which makes it a great option for serverless architectures and cloud-based microservices.
Golang cloud apps perform very well under heavy workloads thanks to Go's efficient garbage collector and lean runtime. This performance increase is a huge benefit in the cutthroat world of cloud development, where speed and responsiveness are essential.
Go's Standard Library and Ecosystem
The standard library of Go is an invaluable resource for developers building cloud-based applications. It offers comprehensive support for developing distributed and networked systems, including HTTP servers, JSON handling, and cryptographic operations. This streamlines development by reducing the need for third-party dependencies.
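As a small illustration, a JSON-speaking HTTP endpoint needs nothing beyond the standard library; the route and payload here are invented for the example:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type health struct {
	Status string `json:"status"`
}

func main() {
	// net/http and encoding/json ship with Go; no third-party imports needed.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(health{Status: "ok"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}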
Furthermore, the Go ecosystem is expanding quickly, boasting an abundance of open-source frameworks and tools that streamline routine cloud development tasks. Go's ecosystem provides support for handling authentication, connecting with cloud APIs, and working with databases.
Docker and Kubernetes Integration
Go plays a pivotal role in the containerization revolution: both Docker and Kubernetes are written in Go. That tight integration makes Golang an obvious choice for developers working on containerized systems. Go and containerization technologies work together to provide smooth interoperability and best-in-class performance.
Cross-Platform Capabilities
Go's design includes a focus on cross-platform development, allowing developers to build applications that can run on various operating systems without modification. This is especially helpful for cloud development, as portability and flexibility are crucial.
Developers can write Go code on one platform and deploy it easily across several cloud environments, increasing the adaptability of their cloud apps. This cross-platform capability fits perfectly with the multi-cloud strategies that many organisations have adopted, allowing them to select the best cloud services for their particular requirements and prevent vendor lock-in.
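To make that concrete, cross-compiling is just a matter of setting two environment variables at build time; the target platforms and output names below are arbitrary examples:

# build Linux binaries for x86-64 and ARM64 from a macOS or Windows laptop
GOOS=linux GOARCH=amd64 go build -o app-linux-amd64 .
GOOS=linux GOARCH=arm64 go build -o app-linux-arm64 .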
Conclusion:
Go is a powerful tool in the dynamic field of cloud development, where performance, scalability, and agility are critical factors. Its cross-platform features, ease of use, performance, broad standard library, compatibility with Docker and Kubernetes, and support for concurrency make it an appealing option for cloud developers.
Full-stack Engineer (JavaScript/Python) - Remote (Ukraine)
Company: Sourceter

This position offers the flexibility to work part-time (20-30 hours per week) up to full-time (40 hours per week). As a Full-stack Engineer, you will be involved in several short-term projects, which will be executed consecutively. You will play a crucial role in designing, developing, and maintaining both front-end and back-end components of web applications. This is an exciting opportunity to contribute to the entire software development lifecycle and work with cutting-edge technologies.

Responsibilities:
- Collaborate with project stakeholders and other team members to analyze project requirements and provide effort estimations.
- Develop and maintain responsive and user-friendly web applications with clean and efficient code.
- Implement front-end components using HTML, CSS, and JavaScript, with expertise in frameworks like Angular.
- Design and build back-end systems using either Python or Golang, ensuring scalability, security, and performance.
- Utilize cloud services to deploy and manage web applications.
- Perform code reviews, debugging, and troubleshooting to ensure high-quality deliverables.
- Stay updated with the latest industry trends and technologies, recommending improvements to enhance development processes.

Requirements:
- At least 3 years of experience working as a Full-stack engineer.
- Strong proficiency in front-end development using HTML, CSS, and JavaScript.
- Experience with the Angular framework.
- Solid understanding of either Python or Golang for back-end development.
- Experience working with SQL and NoSQL databases for data storage and retrieval.
- Experience working with cloud platforms like Amazon AWS, MS Azure or GCP for deployment and infrastructure management.
- Knowledge of version control systems (e.g., Git) and agile development methodologies.
- Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Strong communication skills to effectively interact with team members and stakeholders.

Nice to have:
- Experience with Vue.js or Next.js for front-end development.
- Familiarity with PostgreSQL or other SQL databases.
- Knowledge of Node.js for back-end development.
- Understanding of containerization technologies like Docker and container orchestration tools like Kubernetes.
- Exposure to DevOps principles and continuous integration/continuous deployment (CI/CD) pipelines.

What we offer:
- Competitive market salary
- Flexible working hours
- Team of professionals who know how to build world-class products
- Wide range of cool opportunities for professional and personal growth

Interview stages:
- HR interview
- Technical interview
- Management interview

Apply on the company website.
Does ARM have a leg with HPE CSI Driver?
With the rising popularity of the ARM architecture, seen in the MacBooks, Raspberry Pis and the HPE ProLiant RL300 Gen11 server, it's evident that the industry is shifting towards denser and more efficient computing. Do more with less. The concept that never fails. Ever since I got my RPi400 I've been interested in figuring out if I could:
Run Kubernetes on it?
Provide persistent storage to Kubernetes with the HPE CSI Driver?
Well, it turns out, both of those are true, but a lot of water has flowed under the bridge since then and the RPi400 is out of the picture. Instead, K3s on Ubuntu in VMware Fusion on Apple Silicon has moved in.
Be warned that there hasn't been any real testing done besides "do the lights come on". If something breaks, you get to keep both pieces.
The test bed
The move from Intel to Apple Silicon inadvertently denied me a desktop Kubernetes cluster where I could have a VM with a single node using the CSI driver to conduct ad-hoc experiments with persistent storage.
I’m running the tech preview of VMware Fusion for Apple Silicon and it’s a bit finicky to get Ubuntu 20.04 stood up but this thread explains how.
neofetch --stdout
mmattsson@troy
--------------
OS: Ubuntu 20.04.4 LTS aarch64
Host: VBSA 1
Kernel: 5.14.0-051400-generic
Uptime: 18 mins
Packages: 1725 (dpkg), 4 (snap)
Shell: bash 5.0.17
Resolution: 1024x768
Terminal: /dev/pts/0
CPU: (4)
GPU: 00:0f.0 VMware Device 0406
Memory: 1287MiB / 5918MiB
The K3s install process is stock:
curl -sfL https://get.k3s.io | sh -
I had to reboot the VM to shake life into IP forwarding, YMMV.
Bits and pieces
The HPE CSI Driver is only partially open source, so none of the feature-enhancing sidecars are included in this setup. I've provided the images in my personal Quay repos, but if you want to build the images yourself, it's straightforward. There's a pull request for the csi-driver GitHub repo that hasn't been submitted yet, so the images need to be built from my personal GitHub repo (these steps assume Docker and a working golang build environment, including make).
git clone https://github.com/datamattsson/csi-driver
cd csi-driver
git checkout arm64
make ARCH=arm64 compile
make image
Tag hpestorage/csi-driver:edge and push to your container registry.
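That amounts to something like the following two commands, with registry.example.com standing in as a placeholder for your own registry:

docker tag hpestorage/csi-driver:edge registry.example.com/hpestorage/csi-driver:edge
docker push registry.example.com/hpestorage/csi-driver:edge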
I also crafted a manifest that pulls my pre-built images and has the sidecars removed. Since this was during the K3s 1.23 era, that's what I ran with. The HPE container storage providers are also proprietary, and I could not get the version of Gradle required to build the Nimble CSP working, so this end-to-end solution has been tailored towards the TrueNAS CSP.
# HPE CSI Driver defaults from edge
kubectl create ns hpe-storage
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/co-deployments/master/yaml/csi-driver/edge/hpe-linux-config.yaml

# Modify the image tags if you built the csi-driver image yourself
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/truenas-csp/master/K8s/v2.1.0-arm64/hpe-csi-k8s-1.23-arm64.yaml

# This manifest uses the edge tag built from v2.1.0 sources
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/truenas-csp/master/K8s/v2.1.0-arm64/truenas-csp.yaml
At this point, all the components should be up and running within a minute.
$ kubectl get all -n hpe-storage
NAME                                      READY   STATUS    RESTARTS   AGE
pod/hpe-csi-controller-8585d49b96-wl9mz   5/5     Running   0          20s
pod/hpe-csi-node-nk68h                    2/2     Running   0          20s
pod/truenas-csp-945567544-nqt5h           1/1     Running   0          19s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/truenas-csp-svc   ClusterIP   10.43.104.146   <none>        8080/TCP   19s

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/hpe-csi-node   1         1         1       1            1           <none>          20s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hpe-csi-controller   1/1     1            1           21s
deployment.apps/truenas-csp          1/1     1            1           19s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/hpe-csi-controller-8585d49b96   1         1         1       21s
replicaset.apps/truenas-csp-945567544           1         1         1       19s
The CSI driver is now ready for configuration and how to create persistent volume claims and such is available on SCOD.
Using the “Hello World” example from SCOD, we can observe that provisioning and attaching persistent storage works.
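For orientation, the claim from that example reduces to roughly the following sketch; the field values are inferred from the kubectl output below rather than copied verbatim from SCOD:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-first-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
  storageClassName: hpe-standard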
$ kubectl get pvc,pod
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/my-first-pvc   Bound    pvc-936c321c-520a-479b-b1b1-21fd6ffa3df0   32Gi       RWO            hpe-standard   81s

NAME         READY   STATUS    RESTARTS   AGE
pod/my-pod   2/2     Running   0          96s
Yes, it works!
I don’t have anything besides what I’ve shown here to disclose. This is a proof of concept. The HPE ProLiant RL300 Gen 11 is very strongly positioned to cater for cloud-native workloads. If there’s real demand for a robust storage solution accompanied by the RL300, the support for HPE CSI Driver for Kubernetes will naturally follow.
Kubesploit Cross-platform post-exploitation HTTP/2 Command & Control server and agent...
Kubesploit is a cross-platform post-exploitation HTTP/2 Command & Control server and agent dedicated to containerized environments, written in Golang and built on top of the Merlin project by Russel Van Tuyl (@Ne0nd0g). https://github.com/cyberark/kubesploit
Kubesploit (GitHub - cyberark/kubesploit)
Kubesploit is a cross-platform post-exploitation HTTP/2 Command & Control server and agent written in Golang, focused on containerized environments. Details: While researching Docker and Kubernetes, we noticed that most of the tools available today are aimed at passive scanning for vulnerabilities in the cluster, and there is a lack of more complex attack vector coverage. They might allow you to see the problem but not exploit it. It is important to run the exploit to simulate a real-world attack that will be used to determine corporate resilience across the network.
Running an exploit also exercises the organization's cyber event management, which doesn't happen when merely scanning for cluster issues. It can help the organization learn how to operate when real attacks happen, see whether its detection systems work as expected, and decide what changes should be made. We wanted to create an offensive tool that meets these requirements.
Contributing back to Ansible — flexible secrets with some Sops
This post is from Edoardo Tenani, DevOps Engineer at Arduino.
In this blog, we’re going to answer: How does one store sensitive data in source code (in this case, Ansible playbooks) securely and in a way that the secrets can be easily shared with the rest of the team?
Ansible is an open source community project sponsored by Red Hat, and it's the simplest way to automate IT. Ansible is the only automation language that can be used across entire IT teams, from systems and network administrators to developers and managers.
At Arduino, we started using Ansible around the beginning of 2018 and since then, most of our infrastructure has been provisioned via Ansible playbooks: from the frontend servers hosting our websites and applications (such as Create Web Editor), to the MQTT broker at the heart of Arduino IoT Cloud.
As soon as we started adopting it, we faced one of the most common security problems in software: How does one store sensitive data in source code (in this case, Ansible playbooks) securely and in a way that the secrets can be easily shared with the rest of the team?
The Ansible configuration system comes to the rescue here with its built-in mechanism for handling secrets, called Ansible Vault, but unfortunately it had some shortcomings for our use case.
The main disadvantage is that Vault is tied to the Ansible system itself: in order to use it, you have to install the whole Ansible stack. We preferred a more self-contained solution, ideally compiled into a single binary to ease portability (e.g. inside Docker containers).
The second blocker is the “single passphrase” Ansible Vault relies on: a shared password to decrypt the entire vault. This solution is very handy and simple to use for personal projects or when the team is small, but as we are constantly growing as a company we preferred to rely on a more robust and scalable encryption strategy. Having the ability to encrypt different secrets with different keys, while being able to revoke access to specific users or machines at any time was crucial to us.
The first solution we identified was HashiCorp Vault, a backend service purposely created for storing secrets and sensitive data with advanced encryption policies and access management capabilities. In our case, as the team was still growing, the operational cost of maintaining our own Vault cluster was considered too high (deploying a highly available service that acts as a single point of failure for your operations is something we want to handle properly and with due care).
Around that same time, while reading industry best practices and looking for something that could help us manage secrets in source code, we came across mozilla/sops, a simple command line tool that allows strings and files to be encrypted using a combination of AWS KMS keys, GCP KMS keys or GPG keys. (A quick sketch of the CLI follows the list below.)
Sops seemed to have all the requirements we were looking for to replace Ansible Vault:
A single binary, thanks to the porting from Python to Golang that Mozilla recently did.
Able to encrypt and decrypt both entire files and single values.
Allow us to use identities coming from AWS KMS, identities that we already used for our web services and where our operations team had access credentials.
A fallback to GPG keys to mitigate the AWS lock-in, allowing us to decrypt our secrets even in the case of AWS KMS disruption.
The same low operational cost.
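To make that concrete, here is a hedged sketch of the CLI; the KMS ARN and GPG fingerprint below are placeholders, not real identities:

# encrypt with an AWS KMS key, keeping a GPG key as fallback
sops --encrypt \
  --kms arn:aws:kms:us-east-1:111122223333:key/placeholder \
  --pgp FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4 \
  secrets.yaml > secrets.enc.yaml

# decrypt with whichever key your local credentials can unlock
sops --decrypt secrets.enc.yaml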
Sops' adoption was a great success: The security team was happy and the implementation straightforward, with just one problem. When we tried to use Sops with the Ansible configuration system, we immediately noticed what a pain it was to encrypt variables.
We tried to encrypt/decrypt single values using a helper script to properly pass them as extra variables to ansible-playbook. It almost worked, but developers and operations were not satisfied: It led to errors during development and deployments and overall felt clumsy and difficult.
Next we tried to encrypt/decrypt entire files. The helper script was still needed, but the overall complexity decreased. The main downside was that we needed to decrypt all the files prior to running ansible-playbook, because the Ansible system didn't have any clue about what was going on: those were basically plain Ansible var_files. It was an improvement, but still lacking the smooth developer experience we wanted.
As the Ansible configuration system already supports encrypted vars and storing entire files in Ansible Vault, the obvious choice was to replicate that behaviour using Sops as the encryption/decryption engine.
Following an idea behind a feature request first opened upstream in the Ansible repository back in 2018 (Integration with Mozilla SOPS for encrypted vars), we developed a lookup plugin and a vars plugin that seamlessly integrate Ansible configuration system and Sops.
No more helper scripts needed
Just ensure the Sops executable is installed and the correct credentials are in place (e.g. AWS credentials or a GPG private key), and run ansible-playbook as you normally would.
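With the plugins installed, encrypted values can be referenced straight from a playbook. A hypothetical snippet follows; the lookup name sops matches the pending PRs, while the file path and variable name are invented for illustration:

- hosts: all
  vars:
    # the sops lookup decrypts the file at runtime using your AWS or GPG credentials
    db_password: "{{ lookup('sops', 'secrets/db_password.enc.yaml') }}"
  tasks:
    - name: Show the secret is available (demo only)
      debug:
        msg: "db_password has {{ db_password | length }} characters"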
We believe contributing to a tool we use and love is fundamental in following the Arduino philosophy of spreading the love for open source.
Our sops plugins are currently under review in the mozilla/sops GitHub repository: Add sops lookup plugin and Add sops vars plugin.
You can test it out right away by downloading the plugin files from the PRs and adding them in your local Ansible controller installation. You will then be able to use both plugins from your playbooks. Documentation is available, as for all Ansible plugins, in the code itself at the beginning of the file; search for DOCUMENTATION if you missed it.
If you can leave a comment or a GitHub reaction on the PR, that would be really helpful to expedite the review process.
What to do from now on?
If you’re a developer you can have a look at Sops’ issues list and contribute back to the project!
The Sops team is constantly adding new features (like a new command for publishing encrypted secrets in the latest 3.4.0 release, or Azure Key Vault support), but surely there are interesting issues to tackle. For example, the Kubernetes Secret integration being discussed in issue 401 or the --verify command discussed in issue 437.
Made with <3 by the Arduino operations team!
Ansible® is a registered trademark of Red Hat, Inc. in the United States and other countries.
Contributing back to Ansible — flexible secrets with some Sops was originally published on PlanetArduino
Sr. Software Engineer – SecOps (Remote)
At CrowdStrike we’re on a mission – to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.
About the role:
Cloud Security Posture Management (CSPM) is a new and complementary product area for CrowdStrike. We’re extending CrowdStrike’s mission of “stopping breaches” into the public cloud control plane and native cloud resources. CrowdStrike’s CSPM offering will give customers visibility into both the (mis)configuration and compliance of native cloud resources, and potential adversary activity involving those resources. When coupled with Falcon, CrowdStrike’s endpoint security offering, our CSPM offering will provide a more comprehensive perspective on how the adversary is targeting key customer infrastructure.
This role is open to candidates in USA (Remote) and Bucharest, Romania.
You will…
Lead backend engineering efforts from rapid prototypes to large-scale applications across CrowdStrike products.
Leverage and build cloud based systems to detect targeted attacks and automate cyber threat intelligence production at a global scale.
Brainstorm, define, and build collaboratively with members across multiple teams.
Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team.
Be mentored and mentor other developers on web, backend and data storage technologies and our system.
Constantly re-evaluate our product to improve architecture, knowledge models, user experience, performance and stability.
Be an energetic ‘self-starter’ with the ability to take ownership and be accountable for deliverables.
Use and give back to the open source community.
You’ll use…
· Go (Golang)
· AWS/GCP/Azure/Kubernetes
· Kafka
· GIT
· Cassandra
· ElasticSearch
· Redis
· ZMQ
Key Qualifications:
You have…
Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems).
The ability to scale backend systems – sharding, partitioning, scaling horizontally are second nature to you.
The desire to ship code and the love of seeing your bits run in production.
Deep understanding of distributed systems and scalability challenges.
Deep understanding of multi-threading, concurrency, and parallel processing technologies.
Team player skills – we embrace collaborating as a team as much as possible.
A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture.
The ability to thrive in a fast paced, test-driven, collaborative and iterative programming environment.
The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration.
Bonus points awarded for…
· Authored and led successful open source libraries and projects.
· Contributions to the open source community (GitHub, Stack Overflow, blogging).
· Existing exposure to Go, Scala, AWS, Cassandra, Kafka, Elasticsearch…
· Prior experience in the cybersecurity or intelligence fields
Bring your experience in distributed technologies and algorithms, your great API and systems design sensibilities, and your passion for writing code that performs at extreme scale. You will help build a platform that scales to millions of events per second and Terabytes of data per day. If you want a job that makes a difference in the world and operates at high scale, you’ve come to the right place.
#LI-TJ1
Benefits of Working at CrowdStrike:
Market leader in compensation and equity awards
Competitive vacation policy
Comprehensive health benefits + 401k plan
Paid parental leave, including adoption
Flexible work environment
Wellness programs
Stocked fridges, coffee, soda, and lots of treats
We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.
CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.
Kubernetes Cloud Controller Manager Tutorial for Beginners
Hi, a new video on the Kubernetes Cloud Controller Manager has been published on the CodeOneDigest YouTube channel. Learn about the Kubernetes controller manager, API server, kubectl, Docker and proxy servers with CodeOneDigest.
Kubernetes is a popular open-source platform for container orchestration. Kubernetes follows a client-server architecture, and a Kubernetes cluster consists of one master node with a set of worker nodes. The Cloud Controller Manager is part of the master node. Let's understand the key components of the master node: etcd is a configuration database that stores configuration data for the worker nodes. API Server to…
Senior Python - Big Data
Job Details:
Job Title: Technology Lead | Analytics - Packages | Python - Big Data
Work Location: Hillsboro, OR - 97124

Must Have Skills (Top 3 technical skills only):
1. Python
2. Apache NiFi
3. Kafka

Detailed Job Description:
The Senior Platform Engineer will work as part of an Agile scrum team that will analyze, plan, design, develop, test, debug, optimize, improve, document, and deploy complex, scalable, highly available and distributed software platforms. A successful candidate will have a deep understanding of data flow design patterns, cloud environments, infrastructure-as-code, container-based techniques, and managing large-scale deployments.

Qualifications:
- 5+ years' experience in developing systems software using common languages like Java, JavaScript, Golang or Python.
- Experience in developing dataflow orchestration pipelines, streaming enrichment, metadata management and data transformation using Apache NiFi. Added experience with Kafka or Airflow may help.
- 5+ years in building microservices/API-based large-scale, full-stack production systems.
- Deep industry experience in cloud engineering problems around clustering, service design, scalability, resiliency, distributed backend systems etc.
- 4+ years of experience with multi-cloud (AWS, Azure) and/or multi-region active-active environments, with a background in deploying and managing Kubernetes at scale.
- DevOps experience with a good understanding of continuous delivery and deployment patterns and tools (Jenkins, Artifactory, Maven, etc.).
- Experience with various data stores (SQL, NoSQL, caches, etc.): Oracle, MySQL or PostgreSQL.
- Experience with observability and with tools like Elasticsearch, Prometheus, Grafana, or other open source options.
- Strong knowledge of distributed data processing and orchestration frameworks and distributed architectures like Lambda and Kappa.
- Excellent understanding of Agile and SDLC processes spanning requirements, defect tracking, source control, build automation, test automation and release management.
- Ability to collaborate and partner with high-performing diverse teams and individuals throughout the firm to accomplish common goals by developing meaningful relationships.

Minimum years of experience: 5+
Certifications Needed: No

Top 3 responsibilities you would expect the Subcon to shoulder and execute:
1. 5+ years in building microservices/API-based large-scale, full-stack production systems.
2. Deep industry experience in cloud engineering problems around clustering, service design, scalability, resiliency, distributed backend systems etc.
Kubesploit Cross-platform post-exploitation HTTP/2 Command & Control server and agent written...
Kubesploit is a cross-platform post-exploitation HTTP/2 Command & Control server and agent written in Golang, focused on containerized environments. The currently available modules are:
1. Container breakout using mounting
2. Container breakout using docker.sock
3. Container breakout using the CVE-2019-5736 exploit
4. Scan for Kubernetes cluster known CVEs
5. Port scanning with a focus on Kubernetes services
6. Kubernetes service scan from within the container
https://github.com/cyberark/kubesploit
Pachyderm (YC W15) -- San Francisco or remote (within North America) -- https://jobs.lever.co/pachyderm/
Positions:
* Core distributed systems/infrastructure engineer (Golang) - You'll be solving hard algorithmic and distributed systems problems every day and building a first-of-its-kind containerized data infrastructure platform.
* Front-end Engineer (Javascript) - Your work will be focused on developing the UI, perfecting the user experience, and pioneering new products such as a hosted version of Pachyderm's data solution.
* DevOps -- Pachyderm is hiring a deployment and devops expert to own and lead our infrastructure, deployment, and testing processes. Experience with Kubernetes, CI/CD systems, testing infra, and running large-scale, data-heavy applications is important.
* Solutions Engineer/Architect -- Work with Pachyderm's OSS and Enterprise customers to ensure their success. This is a customer-facing role that bridges support, product, customer success, and engineering.

About Pachyderm:
Love Docker, Golang, Kubernetes and distributed systems? Pachyderm is an enterprise data science platform that offers Git-like version control semantics for massive data sets and end-to-end data lineage tracking and auditing. Teams that find themselves struggling to maintain a growing mess of advanced data science tasks, such as machine learning or bioinformatics/genomics research, use Pachyderm to greatly simplify their systems and reduce development time. They rely on Pachyderm to do the heavy lifting so they can focus on the business logic in their data pipelines.
Pachyderm raised our Series A led by Benchmark (https://pachyderm.io/2018/11/15/Series-A.html), so you'd be getting in right at the ground floor and have an enormous impact on the success and direction of the company as well as building the rest of the engineering team.
Check us out at:
pachyderm.com
http://github.com/pachyderm/pachyderm
How to Create a Kubernetes Custom Controller Using client-go https://t.co/STLzfLgZGn #Golang #API https://t.co/LxBO2htf1c
— Macronimous.com (@macronimous) September 11, 2019
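The linked article walks through the full workflow; as a rough sketch of the underlying client-go pattern (not the article's own code), a minimal pod-watching controller built on a shared informer looks something like this:

package main

import (
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load the local kubeconfig, as a CLI controller would.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A shared informer keeps a local cache of pods and resyncs every 30s.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			fmt.Printf("pod added: %s/%s\n", pod.Namespace, pod.Name)
		},
		DeleteFunc: func(obj interface{}) {
			// Deletes can arrive as tombstones, so assert defensively.
			if pod, ok := obj.(*corev1.Pod); ok {
				fmt.Printf("pod deleted: %s/%s\n", pod.Namespace, pod.Name)
			}
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	cache.WaitForCacheSync(stop, podInformer.HasSynced)
	<-stop // run until killed; a real controller would also run worker queues
}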
Why Does Developing on Kubernetes Suck?

Let me count the ways.

Published on 21 August 2019

Kubernetes has changed the way I operate software. Whole classes of production problems have disappeared, arguably to be replaced by others. But such is the way of the world. All told, I'm happier operating a microservices app today than I was before I started using Kubernetes.

When I'm writing software, though, Kubernetes has only made things harder. In this post I want to walk through all of the problems I have encountered developing software on Kubernetes.

Full disclosure: While I work on Tilt as part of my job, which we have designed to solve some of these problems, another part of my job is writing software that runs on Kubernetes. When there's another tool that solves a problem better than Tilt, I'll use that.

So, how does developing on Kubernetes suck? Let me count the ways.

Myriad Dev Environments

Minikube, MicroK8s, Docker for Mac, KIND; the list kind of goes on. All of these are local Kubernetes environments. In other words: they're Kubernetes, but on your laptop. Instead of having to go out to the network to talk to a big Kubernetes cluster (that might be interacting with production data) you can spin up a small cluster on your laptop to check things out.

The problem comes when you want to run code/Kubernetes configs on several of these clusters, because they each have their own... let's call them quirks. They don't behave identically to one another, or identically to a real Kubernetes cluster. I won't enumerate these discrepancies here, but I'll refer to this problem throughout this blog post when discussing thorny issues like networking and authentication.

There's no tool I'm aware of that solves these problems, aside from just doing your development in a real Kubernetes cluster in the cloud, which is the way I prefer to work these days. If you're in the market for a local dev cluster, we recently published a guide to choosing a dev cluster to help make sense of all the options.

Permissions/Authentication

If you're developing software on Kubernetes, you're probably using Kubernetes in production. If you're using Kubernetes in production, you probably have locked-down authentication settings. For example, a common setup is to only allow your developers to create/edit objects in one namespace. To do this you need a bunch of things set up:

A role
A rolebinding
A secret
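The first two of those look roughly like this minimal sketch; the namespace, names, resources and verbs are invented for illustration:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: dev-editor
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: dev-editor-binding
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: dev-editor
  apiGroup: rbac.authorization.k8s.io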
This works great if you're using a "real" Kubernetes cluster, but as soon as you start using a local Kubernetes setup, things get weird. Remember all those local dev environments? Well, it turns out some of them handle RBAC very differently than you might expect. I ran into an issue where kubeadm had access control set to allow everything. As a result, I had false confidence going to prod that my settings actually restricted permissions, when in fact they didn't. Granted, kubeadm-dind-cluster has since been deprecated, but it goes to show that not all Kubernetes clusters are created equal. I also ran into another problem trying to reproduce the issue on Docker for Mac's Kubernetes cluster, where RBAC rules were not enforced.

Testing things like NetworkPolicies is also fraught. NetworkPolicies don't work at all on Docker for Mac or microk8s, and require a special flag for Minikube. The troubles in networking land don't end there.

Network Debugging

Network ingress is one of the most important things that Kubernetes does. Unfortunately, ingress is implemented differently by different cloud providers and is, as a result, very difficult to test. The different implementations also support different extensions, often configured with labels, which are definitely not portable between environments. I'm lucky enough to have access to a staging cluster to test ingress changes, but even then changes can take 30 minutes to take effect and can result in inscrutable error messages. If you're on a local Kubernetes environment, you're pretty much out of luck. Networking is my least favorite thing to work on in Kubernetes, and the area that I think still needs the most love. There are a couple of tools that help out, however.

One nice tool for at least seeing how your services are connected is Octant. Octant gives you a visual overview of all of your pods and which services they belong to. At least with Octant I can easily go from a piece of code to the way it is connected to the internet.

For complex Kubernetes objects like ingresses that behave differently on different cloud platforms, Kubespy is an invaluable tool. Kubespy shows you what is happening under the hood when objects are created. For example, if I create a service, it shows which pods at which IP addresses Kubespy will serve traffic to:

kubespy trace svc test-frontend
[ADDED v1/Service] default/test-frontend
[ADDED v1/Endpoints] default/test-frontend
    ✅ Directs traffic to the following live Pods:
       - [Ready] test-frontend-f6d6ff44-b7jzd @ 192.168.1.1

Logging into a Container and Doing Stuff

A common thing that every developer reaches for is SSH. Maybe in the future, SSH will be as anachronistic as the floppy disk icon, but for now, I want to log in to a container, poke around, see what the state is, and maybe run some commands like strace or tcpdump. Kubernetes doesn't make this easy. The workflow looks something like this:

kubectl get pods
(look for my pod name)
kubectl exec -it $podname -- /bin/bash

Here's where things get annoying.

kubectl exec -it dan-test-75d7b88d8f-4p45c -- /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
command terminated with exit code 126

What the heck is this? I know that in production I want my container images to be small (both for image push performance and security) but this is a bit much. Fine, let's live in 1997. Just let me run strace.

# strace
/bin/sh: 1: strace: not found

Ugh. It's reasonable that this image doesn't have strace, and kind of reasonable that it doesn't have bash, but it highlights one of the Kubernetes best practices that makes local development hard: keep your images as small as possible. This is so annoying in dev. I don't want to have to keep installing strace all afternoon as my container gets restarted. I also don't want to add strace to my production image, for security reasons, and also because it would increase the image size I'd be pushing up and down to my registry, which would slow down remote cluster deploys.

One of my favorite tools that solves this problem is Kubebox. Kubebox makes it easy to see all your pods, and you just need to press 'r' to get a remote shell into one of them.

But I suppose if you really wanted to, you could put strace and bash on all of your development images, because large images aren't a problem on local clusters, right?
Pushing/pulling images

Pushing bits around on your laptop should be super fast, because there's no need to go out to the network. Unfortunately, here's where the myriad local Kubernetes setups rear their ugly head once again.

Let's talk through the happy path: Minikube and Docker for Mac. Both of these setups run a Docker daemon that you can talk to from your local laptop and from inside the Kubernetes cluster. That means that all you need to do to get an image into Kubernetes is to build it; your pod can "pull" it directly from your local registry, and doesn't need to deal with moving data over the network. MicroK8s doesn't ship with an in-cluster registry turned on by default, but it can easily be enabled with a flag.

In contrast, KIND is a whole other beast. It has a special command that you use to load images into the cluster, kind load. Unfortunately it is unbearably slow.

$ time kind load docker-image golang:1.12
real 0m39.225s
user 0m0.438s
sys 0m2.159s

This is because KIND copies in every layer of the image and only does very primitive content negotiation. What this means is that if you change just one file in the final 15 KB layer of your 1.5 GB image, KIND can copy in the entire 1.5 GB image anyways. Fortunately the kind folks working on the KIND project have made a bunch of improvements to image loading recently. We've also released a proof of concept for running a registry in KIND, which should help improve speeds further.

If I'm going to be using a local development environment I tend to go with Docker for Mac or MicroK8s, though as I stated earlier, these days I prefer to do my development in a real cloud Kubernetes cluster. There are also great tools emerging in this space. Garden caches image layers in a remote registry, reducing what needs to be rebuilt by each developer. Tilt with live_update helps me bypass the need to push and pull images altogether, so that's what I use to solve this problem.

Mounts/file Syncing

Even if you can avoid going out to the internet when pushing an image, just building an image can take forever. Especially if you aren't using multi-stage builds, and especially if you are using special development images with extra dependencies. When compared to a hot-reloading local JavaScript setup, even the fastest image build can be too slow. What I want to do is just sync a file up to my pod. Doing this by hand (with kubectl cp, say) is relatively simple, but tedious, and if your container is restarted for any reason, like if your process crashes or the pod gets evicted, you lose all of your changes. There are tools like ksync, skaffold and Tilt that can help with this, though they take some investment to get set up.

Logs/Observability/Events

In dev, I want to tail the relevant logs so I can see what I'm doing. Kubernetes doesn't make that easy. Each Kubernetes pod has its own log that I have to query individually, and each of these can have many containers. Naively, it's easy to see the logs for just one pod (kubectl logs podname). However, to see an aggregate view, you need to know a lot about how your pods are organized, like which labels apply to which pods in which parts of your app, so that you can run a command like kubectl logs -l app=myapp.

Then there's Kubernetes events, which you watch via an entirely separate command. This sucks because it's in the event log that I'll find important dev information, like if my pod couldn't be scheduled or the new image I pushed up can't be exec'd.
There's a great set of observability tools I can use that help with this in production, but I don't want to be running them locally. Sometimes I can't afford the resources; my laptop is rather constrained. And while those tools are each excellent in their niche, I'd rather have one tool that makes it easy to do common tasks. In other words, I should always be able to start my debugging in one window. Some problems may be so unique or special that I end up going to other tools to resolve them, but I can't be having to tab through twelve windows just to check for one problem. I explore this problem in a different blog post, and I think Tilt solves this well, especially now that it includes Kubernetes events alongside pod logs. The aforementioned Kubebox and garden are two other great options.

Conclusion

While developing on Kubernetes still sucks, we've come a long way just in the past year. The biggest remaining hole that is begging for a solution is networking. If we want to empower developers to create end-to-end full-stack microservices architectures, we need to provide some way to get their hands dirty with networking. Until then, that last push to production will always reveal hidden networking issues.