deployhub-blog
DeployHub
19 posts
Microservices Configuration Management Tools
deployhub-blog · 5 years ago
Microservices Architecture – an Overview
As we shift from monolithic architectures to microservices, we need configuration management tools to track versions of microservices and their dependent application versions over time. How will we see which microservices are running in a cluster, their versions, how they got there, and which applications are using them? It can be a mess. While microservices move us away from traditional build-and-release approaches, we still need a method of tracking the higher-level ‘application version.’ The challenge, then, is how to track your application versions. How can we know which versions of the microservices an application consumes?
We need tools that gather the metadata into a single version of a microservice, or component (web components or database updates, for example). This information is initially collected when you define your microservice, for example when working within the DeployHub catalog. You can collect this data via our interface or through a CI/CD plugin such as Jenkins, Jenkins X, Tekton, CircleCI, Google Cloud Build or Puppet Nebula.
You’ll build a logical view of your application by first defining your package and assigning the versions of the microservices it consumes. Once your application has been defined, your CI/CD process will call DeployHub to automatically increment the application version whenever a microservice it consumes is updated. As microservices are consumed by applications, these tools track the dependencies. At any point in time you’ll see which version of each microservice your application is consuming, how many different versions have been deployed to your Kubernetes cluster, and who else is using the same microservice. In effect, you now have a map that displays this data over time.
You can expect to be managing thousands of microservices in your Kubernetes cluster, so you’ll need to see your microservice inventory along with all configuration management details. DeployHub integrates with your CI/CD process to continually register new versions of your microservices, which in turn creates new versions of your applications. With our inventory system, you’ll always know which version of a microservice your application version depends upon. In other words, we offer tools for microservices configuration management. I hope this helps.
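To make the idea concrete, here is a minimal sketch (illustrative only, not DeployHub’s actual API) of how a logical application version can be incremented automatically whenever one of its consumed microservice versions changes, while keeping a history of which service versions each application version depended on:

```python
# Illustrative model: an application version derived from microservice versions.
# All names here are hypothetical, for explanation only.

class AppVersionTracker:
    def __init__(self, app_name, services):
        # services: mapping of service name -> current version string
        self.app_name = app_name
        self.services = dict(services)
        self.app_version = 1
        # history records each application version with a snapshot of its deps
        self.history = [(self.app_version, dict(self.services))]

    def update_service(self, name, version):
        """A CI/CD pipeline would call this when a microservice is rebuilt;
        any change bumps the logical application version."""
        if self.services.get(name) != version:
            self.services[name] = version
            self.app_version += 1
            self.history.append((self.app_version, dict(self.services)))

    def dependencies(self, app_version):
        """Which service versions did application version N consume?"""
        for v, snapshot in self.history:
            if v == app_version:
                return snapshot
        raise KeyError(app_version)

store = AppVersionTracker("web-store", {"cart": "1.0", "login": "2.3"})
store.update_service("cart", "1.1")   # CI registers a new cart build
print(store.app_version)              # the app version was bumped to 2
print(store.dependencies(1))          # app version 1 still maps to cart 1.0
```

The point of the sketch is the dependency snapshot: the application version is never edited by hand, it is a function of the service versions beneath it.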
deployhub-blog · 5 years ago
Why is configuration management so important in 2020?
The best companies have shifted from monolithic software development to microservices, and yours will be next. But what does that mean, and are you ready for the shift from managing a data center to a Deathstar?
Yes, a Deathstar.  How are you planning to navigate yours?
With a modern architecture, there’s a Kubernetes cluster with nodes and pods running individual containers. Each executes a microservice, talking to the others via APIs. That alone opens up a ton of questions we need to address: where are our applications, which version of a service is each application using, and who is using my microservice? In other words, microservice configuration management just became critical.
In the monolithic world, software configuration management is performed mainly at the version control and compile steps. Source objects were stored in version control tools, a compile/link process put it all together, and we automated it with continuous integration. In microservices, all the linking is done at runtime, and source becomes small snippets that need no branching and merging. It can be a right mess.
What then is the solution?
There are a few products out there for working with microservices, but not all offer a full set of tools. You need one that catalogs, versions, tracks and deploys independent microservices to the applications that consume them. Such as? Well, there’s our product, available in both a free Team version and an Enterprise Pro option. DeployHub creates a single source of truth, exposing which microservices are being used, whether they need to be deprecated, and which applications are using them. In short: version tracking, a catalog, and releases with confidence and clarity. You’ll need a full map of your applications and the services on which they depend. With DeployHub making it easier to migrate to and manage your shared microservices, you can navigate that Deathstar. You’ll have the tools to manage your configurations in this evolving software architecture, and that can only be a good thing for all of us.
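The “single source of truth” idea boils down to an inverse dependency map. As a hypothetical illustration (service and application names invented for the example), given a catalog of which service versions each application consumes, the questions raised above become simple queries:

```python
# Hypothetical catalog: application -> {service: version consumed}.
catalog = {
    "store-frontend": {"login": "v3", "cart": "v2", "payments": "v1"},
    "partner-portal": {"login": "v2", "payments": "v1"},
}

def consumers_of(service):
    """Who is using my microservice, and which version of it?"""
    return {app: deps[service] for app, deps in catalog.items() if service in deps}

def live_versions(service):
    """How many distinct versions of a service are deployed across apps?"""
    return sorted({v for v in consumers_of(service).values()})

print(consumers_of("login"))   # both applications, on different versions
print(live_versions("login"))  # two versions of login are live at once
```

With this map in hand, deprecation and upgrade decisions stop being guesswork: two live versions of `login` means one consumer must migrate before the old version can be retired.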
deployhub-blog · 5 years ago
Stateless Authentication with JSON Web Tokens ☞ https://morioh.com/p/0e76e61af8d5/stateless-authentication-with-json-web-tokens
#nodejs #javascript
deployhub-blog · 5 years ago
This Dashcam’s Flawed Design Let Us Track Drivers in Real-Time Across the US https://ift.tt/35XjXgZ
deployhub-blog · 5 years ago
Pivoting your Pipeline Deployments from Monolithic to Microservices
Webinar
Wednesday, February 19th, 10 AM PT – Zoom, hosted by CircleCI
If you are working in a microservice architecture, you may have found that managing microservices in a CD workflow is very different from what you are used to. In this webinar, we will explore how to pivot from a monolithic CD model to a microservice-based model where small services are independently deployable across many workflows. This webinar will cover the different tools you will need to include in your orchestration pipeline and will show how this shift requires updates to the CD process in order to rein in, track and control the many moving parts of a microservice-based approach.
We will explore the four main pipeline challenges that microservices create:
• Shifting from monolithic application releases to independently deployable services;
• Tracking microservices to the logical view of an application;
• Finding and sharing reusable microservices;
• Tracking microservice versions and configurations.
Because your existing pipeline process will need to be adjusted to fit a microservice model, we will break down these challenges, giving you a solid starting point for building out your future K8s pipeline using CircleCI for pipeline orchestration and DeployHub for configuration management and release.
deployhub-blog · 5 years ago
See how to navigate your microservices and the challenges of a Kubernetes Deathstar. Don't be scared; we have a map for you. https://youtu.be/XTlyNRVcEyk
deployhub-blog · 5 years ago
Migrating to Microservices with Domain-Driven Design
Migrating to microservices may seem like a daunting task. Organizing your reusable components and services is key to a successful implementation of a service-based architecture, so it should be the first step you take in your modern architecture journey. This approach is often called Domain-Driven Design (DDD). The good news is that you can start the process by looking at your current monolithic application in terms of components and Domains. From that perspective you can slowly, and successfully, see what migrating to microservices will look like. DeployHub supports both monolithic and microservice releases, and it facilitates the migration to microservices by leveraging Domains. Domains are critical when you begin to decompose functions into independently deployable services.
To understand this first step in migrating to microservices, let’s break a monolith down into components. If we take a simple web store application, you may have a .jar file, the database, the infrastructure components, and environment variable settings. While the .jar file may be your primary focus, you need to start seeing the database, infrastructure and environment variables as independent components of your overall application. You can then build a strategy for managing each of these layers as an individual service. DeployHub does this using Domains. Domains track, catalog and expose reusable components so they can be easily shared, a critical step in the shift to microservices.
Your monolithic Domains may include:
• Infrastructure Components (Java runtime, Tomcat, environment variables)
• Database and SQL (.sql)
• The Website (.jar)
To visualize this, we can use a triangle that shows the lowest to highest levels of dependency. This simple organizational method remains useful for our more complex service-based architectures.
Monolithic Dependencies
With DeployHub, these Domains are controlled by different User Groups and can contain lower-level Sub-Domains. For example, Operations teams may control the Infrastructure Domain, DBAs the middle tier, and developers the website frontend. As you break down your application into services, you will expand on these Domains. Starting at this monolithic level helps you begin thinking in terms of ‘functions’ and prepares you for the next step.
Migrating to Microservices with Domains
One of the goals for Domain Driven Design is taking an application and breaking it down into its smaller parts. We then organize those parts in a fashion that will allow reuse.
If we think about the potential microservices for a simple store website, we’ll have:
• the website frontend
• a cart
• credit card processing
• a check-out system
• advertisements and recommendations
• sign-up
• login
• shipping
• taxes
We start by expanding the triangle. Think about what’s most common. As with our monolith, the lowest level is infrastructure and database. In our service-based architecture, these are followed by credit card processing and login/sign-up. Above that we may have the cart, and on top of that are the more specific pieces such as the ads and recommendations. At the peak of the triangle is the frontend.
Microservice Domains
This triangle shows how we build up from the most common pieces and represents how we would define Domains. Each section of our triangle becomes a Domain.
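The Domain triangle can be sketched as a simple hierarchy. The Domain and service names below are illustrative; in practice the structure is user-defined:

```python
# An illustrative Domain catalog for the store example above.
# Each Domain groups the reusable services published under it.
domains = {
    "Infrastructure": ["java-runtime", "tomcat", "env-vars"],
    "Database": ["schema-migrations"],
    "Payments": ["credit-card-processing"],
    "Identity": ["login", "sign-up"],
    "Shopping": ["cart", "checkout"],
    "Personalization": ["ads", "recommendations"],
    "Frontend": ["store-website"],
}

def find_domain(service):
    """Locate a reusable service by walking the Domain catalog."""
    for domain, services in domains.items():
        if service in services:
            return domain
    return None

print(find_domain("login"))  # the login service lives in the Identity Domain
```

The value of the catalog is discoverability: a second team building a store can look up `login` by Domain and reuse it instead of writing its own.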
Domains and Sharing
Say we have another team that’s writing a store website, Website B. With DeployHub they find common pieces based on the Domains. Once you define your Domains, your developers can begin publishing and deploying microservices under the Domain structure making it possible for other teams to easily find and then reuse the services. When our next team creates a new store, they begin the process of re-use.
Shared Microservices
Between Website A and Website B we have achieved reuse across all the shareable layers, from the infrastructure to the cart. This allows us to write the login, sign-up, cart, and credit card processing once. DeployHub facilitates this level of design with a highly customizable Domain structure: you define what your triangle looks like, and Domains then make it easy to find and share the microservices.
The OOPS in Object-Oriented Programming – Let’s Not Repeat that Mistake
Object-Oriented Programming (OOP) attempted to achieve a similar level of reuse. Part of the problem with OOP was teams’ inability to run builds where linking was carefully managed from a central ‘common’ location. To work around it, developers would copy the ‘common’ object, like our login service, into their local repository so the build could use it. Their build would always reference that private version of the library, and updates to it stayed private; little, if anything, was contributed back to the ‘common’ version. This mistake could easily be repeated with microservices. That would be unfortunate and represent a failure in the implementation of a service-based architecture.
The question is always ‘how backward compatible are the new versions?’ Are we breaking any signatures on the methods? If we’re not breaking any interfaces, can we consume the latest versions? And which application is consuming which version of a service?
For example, Website B is going to use v2 of the login service and Website A will use v3. We need to track which website is using which version. If you want to deprecate v2, Website B will have to move to v3. What is needed is a method of impact analysis to accurately make these kinds of decisions. This tracking ability is key to a successful microservice implementation and part of DeployHub’s Domain dependency management features. DeployHub provides this level of tracking, versioning and reporting to make every microservice deployment successful and visible for high-frequency updates.
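The deprecation scenario above is easy to sketch. This is a hedged illustration of the impact-analysis idea, not DeployHub’s actual reporting interface; the data and function names are invented for the example:

```python
# Which applications are pinned to which version of the login service?
usage = {
    "website-a": {"login": "v3"},
    "website-b": {"login": "v2"},
}

def impact_of_deprecating(service, version):
    """List the applications that must migrate before (service, version)
    can be retired."""
    return sorted(app for app, deps in usage.items()
                  if deps.get(service) == version)

print(impact_of_deprecating("login", "v2"))  # Website B must move first
```

Running the query before deprecating v2 tells us exactly who is affected, which is the whole point of keeping the dependency map current.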
Further Reading on Domains
For more information on DeployHub’s Domains, refer to the User Guide Section on “Using Domains.”
More Information
Versioning your Deployment Configuration for a Single Source of Truth
Domain Driven Design using Microservices
Migrating to Microservices – Step 1: Define your Domains – Whitepaper
Microservices and Components
DeployHub’s Version Engine
Drive your Deployment Process using the Jenkins Continuous Deployment Plug-in
Track Component to Endpoint with a Feedback Loop
DeployHub and Jenkins – This Demo shows how DeployHub interacts with the Jenkins pipeline (including the blue ocean visualization).
DeployHub Team Sign-up –  The hosted team version can be used to deploy to unlimited endpoints by unlimited users.
Get Involved in the OS Project – Help us create the best, open source continuous deployment platform available.
https://www.deployhub.com/jenkins-continuo…ployment-plug-in/
deployhub-blog · 5 years ago
On the State of Tech Funding in NM
How Can New Mexico Create a Thriving Technology Economy?
Let’s be honest here: the technology startup scene in New Mexico is somewhat lacking. There are, to be sure, the Labs (Sandia, LANL, etc.), a source of a certain kind of technology, but when you compare New Mexico’s economy to our neighbors’ in the region, we’re so far behind as to be embarrassing.
In 2017, our state was “The highest in the nation in general unemployment, skilled labor is well below the national average,” according to a report cited on the NM Tech Council website (Robert Half Technologies, 2018).
The New Mexico Technology Council is a member-based organization that advocates for, promotes, and represents technology companies in the area. Since 1972, NMTC has slowly expanded and now has 160 IT/tech-related companies. In a 2017 report, they stated that they encouraged more STEM education and training programs, encouraged tech-based economic development, and wanted to address the need for more training to create a suitable software workforce. The website also puts out a call to action: “Outsource your Software Development to New Mexico!” Their pitch, when giving software businesses reasons to set up shop here, talks of low business costs, quality of life, and the skilled workforce. Didn’t they just contradict themselves? Well, that aside, let’s look at how we’re doing statistically compared to Arizona, Utah, and Colorado. Oh, let’s look at Kansas and Oklahoma too.
First though, the backstory and then some definitions to clarify what I’m looking at here and why you should care.
This spring, I realized that my little side project, an online sharing platform for travel blogs, has the potential to become a profitable and scalable technology-based business employing others. With a global reach of 58,000 views in its first year on only $350 in marketing costs, and with readers from 114 countries and writers from 64, the website (www.wanderlust-journal.com) has expanded beyond my hopes. Once I saw these numbers, I contacted a few successful business entrepreneurs and incubators for advice. The responses varied considerably; SCORE told me to get QuickBooks and WESST suggested looking for venture capital funds. I went with the latter, which leads me to the following definitions.
I’d not thought of my website as a technology startup but in some circles, it is. I’d thought of tech as being software development, those behind the scenes geeks coding in little cubicles with mugs of Starbucks and Bluetooth earbuds, nodding off to some inner enthusiasm for numbers.
Technology is many things to many people. For now, I’ll give you the definition of software used by the BSA Foundation, an association that advocates for the software industry around the world. Their website, software.org, explains that the modern definition of technological advancement is no longer limited to tangible, packaged software products. The term now includes cloud-based software as a service (SaaS), cloud storage and computing, mobile application development, hosting, software publishers, computer system design, data processing, and internet publishing/websites/broadcasting, my own area of interest.
The BSA Foundation reported on the Economic Impact of Software (2017) by compiling information from US Bureau of Labor, US Census, US Bureau of Economic Analysis and others. I focused on software jobs’ growth as a percentage, the total number of jobs both direct and indirectly created by software technology companies, and the direct value to the state’s GDP, all within a four-year framework of 2014 to now.
Arizona had a 21.21% increase in software jobs, with a total of 99,625 such jobs, and a $6.7 Billion value added to their GDP.
Colorado’s software jobs grew by 12.2% with a total of 140,271 tech related jobs, and $14 Billion.
Kansas grew by 37.5%, a total of 38,382 jobs, and $3 Billion.
Oklahoma grew by 11.8%, a total of 24,706 and $1.8 Billion.
Utah was up 13.3%, to 98,282 jobs, and $5.9 Billion.
New Mexico’s software jobs grew by 14.8%, bringing the total number of direct and indirect software-related jobs to…9,993. The direct value added to the NM GDP was $874 Million.
After reading this report, my question shifted from worrying about my own selfish wish for funding to what are these states doing that we’re not? How can we become a thriving software economy like our neighbors have? Let’s look first at one of these states, checking the facts and figures and then I’ll address what I found to be the case for New Mexico.
Utah ranks 7th for venture capital activity. Over the last 20 years, Utah has expanded from 1,500 tech companies to 6,700. Their technology industry provides over 302,000 jobs. Twenty-five years ago, they had only one VC fund; their Governor, Michael O. Leavitt, called it “a whopping $20 Million.” (Remember this figure; it comes back into the conversation.) Leavitt actively went to the big tech companies in Silicon Valley and asked what he, and Utah, could do to become a viable destination for their businesses to move their headquarters. He was told that in order to bring tech jobs to Utah, he needed to invest in creating a skilled workforce. Since 2007, the Utah Engineering Initiative has grown to more than 40,000 students graduating with relevant skills at higher education levels. In the last three years alone, Utah has added over 17,000 small businesses (not just technology) with approximately another 400,000 new jobs created.
New Mexico is not quite so impressive for venture capital activity although we too have $20 Million available, just like Utah did — twenty-five years ago.
With $5M from US Treasury, $10M from NMSIC, and $5M from private institutional investors, New Mexico has the Catalyst Fund. The program was implemented in 2016 to invest in locally based technology startups who would benefit the state’s economy. Sun Mountain Capital, a private investment partnership, acts on behalf of this Catalyst Fund. They have the job of finding, vetting, and investing in approximately 8–10 Venture Capital companies to act as portfolio fund managers, who themselves then are responsible for investing in technology startups. The average investment into these Portfolio funds was meant to be about $2M. These chosen VC companies must then raise at least 60% matching capital. The Executive statement of 2016 says that Catalyst Fund was to invest in fifty tech startups within three to five years. It’s now 2019, three years in, and it’s been challenging and interesting to ask where that original $20M of federal money is now. Who’s benefitted from the funds? Bear in mind, these funds are matched by the companies working as portfolio managers, so somewhere there’s approximately $40 Million waiting to be invested in the NM tech economy.
I spoke to a couple of partners at Sun Mountain Capital, who explained how the process of finding and vetting appropriate VCs to hold the Portfolio Funds took longer than expected. Another Venture Capitalist said, (quite candid once promised anonymity), “The decision to have six companies to manage the CF investments is questionable; some of these VCs have limited experience in the field of tech startups, they’re unfamiliar with the challenges and needs of the industry or staffing.” As part of my research, I also talked to Dorian Radar from NMA Ventures. When I asked for her opinion on the current situation of the tech startup industry, she said, “It’s growing but there are problems with outreach. Five years ago, in Albuquerque, there were many business accelerators and little funding. Now it’s the opposite, more funds than tech startups and not enough entrepreneurs know what’s available.”
When I asked four of the six Catalyst Fund portfolio managers how they define ‘tech’ in this context, the range of answers covered medical, customer software for sales and marketing, and hard sciences for the bio/agriculture industry. Not one mentioned, or focuses on, business-to-business software development. One VC replied that it’s “a wide category that includes Meow Wolf since they use so much technology,” although I’ll admit to thinking of Meow Wolf as an immersive entertainment experience and not a tech startup. Another interviewee mentioned that if a startup were innovating software, its product should be patented. Unfortunately, that’s not realistic in this industry. Big corporations such as IBM and Microsoft no longer innovate or develop software; they buy smaller companies that already have adoption. Think about the modern definition of technology and software that I mentioned earlier. Perhaps the fund managers’ understanding of the field they’re meant to invest in is outdated and needs a good reboot? VCs, I’m talking to you.
So, who is getting funded with the Catalyst Fund?
“Targeted companies will originate from New Mexico’s research universities and national labs, as well as from the private sector,” Sun Mountain stated in their Executive Statement in 2017. This seems to be the consistent focus as in an article online at Santa Fe Today (2018) repeated the idea: “New Mexico is well leveraged to develop and commercialize innovative technologies from its research universities and three federal laboratories.”
Cottonwood and Tramway invested in BennuBio, a hard science/health company. The second success story is BayoTech, another hard science/agriculture-focused company based in Albuquerque, which had help from Cottonwood and Sun Mountain. NMA Ventures recently invested $300,000 in Fusion Funnel, a sales and marketing software company. These examples were all I could find on Crunchbase, Discover.org, through various web searches, and in conversation with four of the main six investors in New Mexico. I’d love to be corrected if my findings are wrong. Why? Because it brings up the lack of transparency and oversight. Why is it so hard to know who has benefitted from the federal government’s fund for our state? Why has it taken me so long to uncover only those three?
Compared to other states, NM is the only one that tries to build technology companies with seed rounds under $500K. This approach may work for smaller cottage industries like mine but is unrealistic for tech companies. According to some, it’s safer to invest this initial amount to test an idea before investing at a higher level of $1.5M with expected growth to $100M. But an initial round of $300K funds only a small team for six months, not enough time to create a product, much less understand customer needs or product-market fit. Bearing in mind that only three such companies have received Catalyst Funds so far, we know that NM still has the money. The recruitment of experienced CEOs, management, and staff is still the issue. How then do we create the ideal mix of innovation, management, and money?
On May 9th, 2019, Mayor Webber hosted an event called the Santa Fe Tech Industry Spring Convening. The question raised at the meeting was how do we develop a thriving, sustainable technology industry? And who supports such efforts? Is this all talk and no action?
Matt Brown, from the Office of Economic Development, spoke of how many tech startups have 1–5 employees and are still in their first five years of development. Matt mentioned the obstacles of funding, recruitment, and education, and the need for a game plan to work together across the sector. Peter Mitchell from NM Economic Development then gave an overview of state-run programs. The NM Tech Council is promoting the state as a destination for new software businesses, citing lower running costs, lower salaries (as if that’s a good thing), and ample skilled workers (which I question). NM Tech Works offers basic coding, website building classes, and social media skills. The NM Job Training Incentive Program has supported the creation of 46,000 jobs within 1,500 businesses over the last 47 years. We also have programs such as the Small Business Development Center, WESST, Arrowhead, Startup New Mexico, and SCORE; not all are focused on software or technology, but software touches all aspects of our lives these days, whether or not it’s acknowledged as such. Still, it looks like we’re actively looking for solutions.
Mayor Webber joked that “Santa Fe is one large TED Talk,” a combination of the creative and the technology-minded. Many of us living here are freelancers: independent, creative, and educated, looking for work through networking opportunities. The state has a history of technology, art, entertainment, film, tourism, and the service industry. He asked: how do we connect across these different areas to share our expertise and experience and grow our economy together?
Dorian Radar, one of the panelists, talked of how a few years ago there were many more events and conferences, almost too many. Nowadays, she said, there are fewer events and outreach opportunities to reach those in rural NM with great ideas and products. NMA Ventures and NM Angels hold office hours in Albuquerque and online for those with ideas to get suggestions, references, and feedback. In my opinion, more coaching opportunities are still needed for startups; the information is hard to find, and the path to funding is difficult to maneuver.
Others mentioned that there’s a problem with creating an ecosystem for employees. There’s money to invest, and there are some great ideas here, but there’s not enough of a skilled or experienced workforce. It’s a concern and a challenge to bring in qualified leaders and managers. Paying high enough wages is only one aspect of the struggle to bring in qualified managers for any business. People might move here with an offer of work, but what happens if they want to change companies? Who else can they work for, especially in the tech industry? Employees need to be able to cross-pollinate, that is, find other local work in the industry rather than move away, following the money and funds. Many of the tech staff moving here are young adults with families, interested in settling long term.
At this meeting, the Mayor stated, “Denver and Austin are losing their souls” with their big tech growth. He even said, “you can have tech sophistication, growth, investment but we’ll keep it our way.” He then introduced Descartes Labs on the panel for discussing the state of technology in New Mexico.
Descartes Labs, Skorpios, and RiskSense are often mentioned as role models for other tech startups. However, Skorpios received seed funds from out of state, and only at its Series B round did Sun Mountain and Cottonwood invest. RiskSense got seed funding from out of state, and at Series A, Sun Mountain invested; they opened an office in CA to attract funding. Mark Johnson, the CEO of Descartes Labs, did the hard work himself, bringing in both funding and senior management from Silicon Valley and other states to complement the core group from the Labs. None of these companies received Catalyst Funds or financial help as true startups; they were only invested in locally after they’d proven themselves through their own efforts.
The Mayor talked highly of investing time and money in new solutions and programs, and of networking across the existing technology community in the state. He mentioned the need for affordable housing and jobs for trailing spouses, but said little about investing in STEM education (science, technology, engineering and mathematics) or internship training, and he seemed more interested in quality of life and creating a downtown innovation district.
After the meeting, I handed him my notes on what the neighboring states are doing, the original version of this article, and he agreed to meet with me and Tracy Ragan of DeployHub, a central microservice-sharing platform for software developers. (I work for DeployHub part time as a Communications Officer.) We’d wanted to share our findings on funding for tech startups from both ends of the spectrum: my customer-focused product and her business-to-business platform for enterprise software development. We had an appointment for June 13th at 10:30 am. Once we’d driven from Madrid to Santa Fe, the Mayor’s secretary rescheduled for half an hour later. We returned, only to be postponed for another week. The next day, a call from the Mayor’s office cancelled our upcoming meeting. They wouldn’t reschedule for another time. It was most odd, to put it politely, especially after all his talk of being open to coffee with his constituents, community building, and sharing information.
I’d hoped to give him my research on how Utah, Arizona, Colorado, Kansas and Oklahoma are all creating huge technology growth: impacting their states’ GDP in the millions, improving median incomes, lowering unemployment, training tens of thousands of students in technical skills, and creating anywhere from 24,000 to 140,000 tech jobs, unlike New Mexico with our 9,000. I’d wanted to give him their recommendations and share the programs that worked. We don’t have to do it “our way,” as the Mayor says; we can learn from others and collaborate by reading their roadmaps to success for software and technology startups.
In the meantime, I’ll share with you what these neighboring states say contributed to their economic growth. These are the steps and programs they have implemented with incredible success, their recommendations.
1. Mayors and Governors to actively approach the technology corporations and find out what they need in order to move to our region.
2. Create community: sponsor monthly social events for local tech industry people to gather, chat, talk shop, share ideas, ask for suggestions, and build strong network.
3. State-run meetings: Each quarter host a gathering for tech industry that is educational, and network/skills based with a featured speaker (startup leader etc.) followed by breakout sessions on various topics of interest.
4. Websites: User friendly, easy access to resources, to training programs, to events, and relevant information that is updated regularly, with clear links that work.
5. Education/Training programs:
- Create scholarships for higher ed training in tech
- Internships (e.g. Talent Ready Utah’s certificates/hands-on training while in high school)
- New businesses sponsor in-house internships as part of the initial funding agreements
- Provide training programs across the state at all levels of entry into field
- Higher education degrees (not only entry level skills)
6. Invest in education from both sides: Pay teachers well to keep a talented workforce committed to the state, which creates an educated, well-paid, and engaged populace that is more likely to stay. Fund support staff, new textbooks, upgraded technology, and infrastructure.
7. Taxes: Develop and pass pro-technology legislation with easier, business-friendly regulations that respect the needs of different businesses, plus Research & Development (R&D) tax credits and other incentives that promote more investment in startups.
8. Physical clusters/real estate so tech companies can collaborate, sharing information, resources, and buildings to create a stronger micro-community/economic base. (e.g. Utah created clusters and real estate by voting to move a state-run prison off a prime location along its business corridor)
9. State-wide Collaboration: Connect with other cities to work together and share programs, resources, and information across Santa Fe, Albuquerque, Las Cruces, etc. Connect. Develop. Work together.
10. Promote these partner programs and the infrastructure of light rail, housing, incubators, co-working offices, etc., and not only the quality-of-life aspect, when courting investors and businesses.
11. Annual Tech Summit with featured speakers, startups, and investors from the region, spread over two days; invite the investors to explore the area and colleges, as Utah did with great proactive success.
12. Inclusivity/Diversity: When promoting each state for new tech businesses, be wary of language that is divisive, xenophobic, or limiting. Encourage outsiders. Welcome investors and companies. Invite them to visit, and ask them to speak at local universities and schools in exchange.
And lastly, I’d like to return to a specifically New Mexico concern: The Catalyst Fund.
Created three years ago to invest $20M in tech startups, it now has approximately $40M with matching funds. It has not done its job. There needs to be oversight, accountability, and transparency. Who benefits from holding onto the funds? Why is it so impossible to discover who has received Catalyst Fund dollars? Which tech companies have received the funds, how much, and why were they chosen?
As far as I can tell, the local venture capitalists could
- Take more financial risks
- Research what technology (software) is these days
- Better understand the various needs of tech startups
- Invest at seed instead of waiting for second rounds of funding
- Be open to soft science tech/software business to business development
- Consider larger initial funds so the startups’ cap tables aren’t impacted negatively
- Learn more about the industry
- Update the Catalyst Fund website regularly with full information, links to the fund managers and lists of who has received help and why.
- Focus on outreach and education about the money available
- Research other states’ investments and successes
And finally, please, simply invest in more tech startups.
New Mexico needs to play catch-up: without greater investment, pro-technology legislation, and the government physically out on the field, we’re a losing team. After all my research into our neighbors’ successes and the situation here, I believe it’s a matter of scope. We need to expand the dollar amounts invested, increase the number of larger startups, build more STEM programs in high schools and higher education, reach out to our rural communities with more access to training, and actively seek technology businesses to move here. We can grow into a thriving technology economy: we have the beginnings of a great lifestyle, climate, and real estate market, a need and hunger for training and skilled employment, and ideas and prototypes for tech startups. Let’s expand what we’re already achieving, aim higher and wider, and claim our state as one to invest in. We’re doing it. We just need to expand our idea of what success looks like in New Mexico.
Startup
Nm Tech
Technology
Funding
New Mexico
0 notes
deployhub-blog · 5 years ago
Text
Running Blue-Green Deployments
According to Cloud Foundry, “Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green.” Deploying between two environments is easily supported by DeployHub through the definition of DeployHub Environments. Your DeployHub Application (a collection of Components such as microservices) can be assigned to multiple Environments and deployed in a ‘rolling’ fashion or in this popular Blue-Green round-robin process. With DeployHub, you define both Environments for your Application and push your deployments accordingly. DeployHub tracks the endpoint history of each Environment, including the version of each Component released, the Application Version, date, time, and user. It is important to understand that for your Blue-Green deployments to become ‘active’ to your end users, you will need a load balancer that redirects users after the deployment has completed successfully. DeployHub is not a load balancer, but it can communicate with your load balancer as a ‘post’ deployment step to re-route users.
DeployHub and Cloud Foundry
Blue-Green deployments are most commonly associated with Cloud Foundry. Cloud Foundry is an open source cloud platform as a service (PaaS) on which developers can build, deploy, run, and scale applications on public and private cloud models.
Developers interact with Cloud Foundry by running a command line tool (cf) which then talks to the Cloud Foundry Endpoint. You release a new Application (or make changes to an existing one) using the “cf push” command to upload the Application to the Cloud Foundry Endpoint. Cloud Foundry supports multiple “spaces”. Each space can be configured individually and used as a different target type (for example, Dev, Test, and Production).
Since DeployHub is an agentless solution, it can deploy to Cloud Foundry by executing the ‘cf’ command automatically as part of the deployment process. DeployHub can extract your Application code from any Repository and push the changes up to the Cloud Foundry Endpoint. It can then track which version of the Application and associated Components is installed in which Cloud Foundry space.
Installing Cloud Foundry’s ‘cf’ Command Line Interface (cf CLI) on the same server as DeployHub makes it easy to execute cf commands. An Application contains one or more Components, any one of which can have a Custom Action containing a cf command that targets the designated Cloud Foundry Endpoint. For instance, a Cloud Foundry application named my_app could be started by writing a script named startMyApp.sh that looks like:
#!/bin/sh
cf api https://myexample.com
cf auth myname mypassword
cf target -o myorg -s myspace
cf push my_app -c null
If this script resides within the /scripts directory of the DeployHub installation, it can be called by putting “/scripts/startMyApp.sh” into a Procedure, placing it within an Action, and putting the name of the Action into a Component’s Custom Action field. Deploying the Application that contains the Component causes the Action to be called, which runs the script and starts the Cloud Foundry Application named my_app.
Cloud Foundry is ideal for Blue-Green deployment strategies. In such a scenario, Production is mirrored across two distinct environments — “blue” and “green”. End Users point to one of these Environments whilst deployments are made to the other. Once testing is complete on the deployed Environment, users are switched over to this Environment and the deployment is performed again to the other Environment. This maximizes uptime and minimizes the risks in performing a Rollback.
DeployHub supports such a Blue-Green deployment strategy. Both Environments can be targeted individually as part of two separate deployments or you can deploy to both with DeployHub deploying to the second Environment automatically following successful test and switch-over.
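The switch-over itself can be scripted as another ‘post’ step. Here is a minimal sketch, assuming hypothetical app names (my_app-blue / my_app-green) and reusing the myexample.com domain from the earlier script; the cf route commands are left commented out so the script only works out which side is idle:

```shell
#!/bin/sh
# Sketch of a Blue-Green switch-over step (all names are placeholders).
# Figure out which environment is idle, deploy there, and once testing
# passes, re-map the production route to the fresh side.

# Given the currently active environment, return the idle one.
idle_env() {
  if [ "$1" = "blue" ]; then echo "green"; else echo "blue"; fi
}

ACTIVE=${ACTIVE:-blue}        # the side users are on today
IDLE=$(idle_env "$ACTIVE")

echo "Deploying to idle environment: $IDLE"

# After a successful deployment and test of my_app-$IDLE, point the
# production route at it and detach the old side (standard cf CLI):
# cf map-route   "my_app-$IDLE"   myexample.com --hostname www
# cf unmap-route "my_app-$ACTIVE" myexample.com --hostname www
```

DeployHub would still record which Application Version landed in each Environment; this script only handles the route swap.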
For more articles, please look here: https://www.deployhub.com/microservices-resources/
0 notes
deployhub-blog · 5 years ago
Text
Working with Helm and Microservices: Integrate Helm Into Your DeployHub Release
Helm helps with the process of deploying your container images to Kubernetes. It is an agentless solution that can be called via a DeployHub ‘Custom Action.’ Helm provides a broad set of pre-defined Helm “Charts.” A Helm Chart is a reusable package of templated Kubernetes manifests that simplifies the deployment of your container image. When DeployHub executes the release process, it calls the Helm Chart you have defined in your Custom Action. What DeployHub offers is the versioning around what was released, including the version of the Helm Chart. DeployHub tracks all of the configuration of your release and all changes to that configuration, including Helm.
Helm is called by DeployHub using a “Custom Action.” A Custom Action can replace the usual DeployHub deployment engine processing by calling an external script that performs its own deployment activities. Custom Actions can be used when you want an external tool to perform the delivery step of the deployment process. This is the case for Helm.
Importing the DeployHub Helm Procedures
To use Helm, you will need to import the most current DeployHub Helm Procedures from GitHub. There will be two:
• WriteEnv2Toml.re — This Procedure takes all the attributes from DeployHub Environments, Applications, Endpoints and Components and writes them to a file readable by the Helm Procedure.
• HelmUpgrade.re — This Procedure performs a Helm upgrade/install of the Helm Chart.
Download them from: https://github.com/DeployHubProject/DeployHub-Pro/tree/master/procedures
Once downloaded, you will need to import them into DeployHub as Procedures. To import these Procedures, first log in to DeployHub and select the Flows menu. Navigate to the Function & Procedures tab. Select your Domain, such as ‘Global Domain,’ and right click for the menu. Choose “Import a Function or Procedure into this Domain”. Upload the two Procedures one at a time into the DeployHub database.
Creating a Custom Action for Helm
Once you have imported your Helm Procedures, you can define your Custom Action. Change to the Workflow tab on the right pane. Select your Domain and right click. This will give you the option to create a “New Action in this Domain.”
Name the new Action “HelmChart” (no spaces).
Now we are going to customize this Action. Go to the Workflow tab. You will see the ‘Activity Hub’ on the right-hand side of your screen. Navigate to your Domain to find the two Procedures. Drag them onto the area under Start. This brings up the dialog box to enter the parameters. No fields are required for WriteEnv2Toml.
Repeat the process for the HelmUpgrade Procedure and fill in the fields as follows:
Title: Not required
Summary: Not required
RspFile: $RspFile (the results from the WriteEnv2Toml.re Procedure)
Chart: $(Chart) (the Helm Chart to be used during the deployment)
Release Name: $(component.name) (the name of the Component)
At this point the Action is ready to be used by anyone with access (based on Domain and security options). Each Component that uses the Action will need to define specific values. Because this new Action is reusable, no Component variables are defined.
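Under the hood, the HelmUpgrade.re Procedure amounts to a helm upgrade/install invocation fed by those parameters. The following is only a rough sketch: the release, chart, and namespace values are placeholders, and the real logic is in the Procedure imported from the DeployHub-Pro repository.

```shell
#!/bin/sh
# Sketch of what the HelmUpgrade step boils down to. The variable
# names mirror the DeployHub attributes described above; the values
# are placeholders.
RELEASE=${RELEASE:-my_app}         # $(component.name)
CHART=${CHART:-stable/my_chart}    # $(Chart)
NAMESPACE=${NAMESPACE:-default}    # Chart Name Space attribute
VALUES=${VALUES:-values.yml}       # values file built from DeployHub attributes

# Build the command rather than running it, so it can be inspected
# (or logged by DeployHub) before execution.
build_helm_cmd() {
  echo "helm upgrade --install $RELEASE $CHART --namespace $NAMESPACE -f $VALUES"
}

build_helm_cmd
```

The `--install` flag makes the upgrade idempotent: the first deployment installs the Chart, and every subsequent one upgrades it in place.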
Assign the HelmChart Action to a Docker Component
For each Docker Component, you will need to define the variable values. Values are specified when you create a new Docker Component. These values will override those defined at the Application or Environment level. The values from DeployHub will be passed along to Helm’s values.yml file at execution time.
Docker component items have the following attributes, none of which are required:
BuildId: The build ID from the build system such as Quay or DockerHub
BuildUrl: Build URL for the build system
Chart: Helm chart for the component
Chart Version: Version of the Helm chart
Chart Name Space: Namespace for the Helm chart to deploy to
Operator: Kubernetes Operator
DockerBuildDate: Timestamp for the Docker build
DockerSha: SHA for the Docker image
DockerRepo: URL for the Docker registry
GitCommit: Git commit that triggered the build
GitRepo: Git repo name
GitTag: Git tag such as ‘Master’ or ‘v1.5.0’
GitUrl: URL to the Git repository
BuildNumber: Build job number for CI/CD
Build Job: Build job name for CI/CD
ComponentType: Name of the Component Type
ChangeRequestDS: Name of the Change Request Datasource
Category: Name of the Component’s Category
AlwaysDeploy: Y/N
DeploySequentially: Y/N
BaseDirectory: Base directory for the Component
PreAction: Name of the Pre-Action
PostAction: Name of the Post-Action
CustomAction: Name of the Custom Action
Summary: Component summary or description
0 notes
deployhub-blog · 5 years ago
Text
How to Migrate to Microservices
Migrating to microservices may seem like a daunting task because organizing your reusable components and services is key to a successful implementation of a service-based architecture. Because it is so key, it should be the first step in your modern architecture journey. This approach is often called Domain Driven Design (DDD). The good news is that you can start the process by looking at your current monolithic application in terms of components and Domains. From that perspective you can slowly, and successfully, see what migrating to microservices will look like. DeployHub supports both monolithic and microservice releases. It facilitates the migration to microservices by leveraging Domains. Domains are critical when you begin to decompose functions into independently deployable services.
To understand this first step in migrating to microservices, let’s break a monolith down into components. If we take a simple web store application, you may have a .jar file, the database, the infrastructure components, and environment variable settings. While the .jar file may be your primary focus, you need to start seeing the database, infrastructure, and environment variables as independent components of your overall application. You can build a strategy for managing each of these layers as an individual service. DeployHub does this using Domains. Domains track, catalog, and expose reusable components so they can be easily shared, a critical step in the shift to microservices.
Your monolithic Domains may include:
· Infrastructure Components (Java runtime, Tomcat, Environment Variables)
· Database and SQL (.sql)
· The Website (.jar)
To visualize this, we can use a triangle that shows the lowest to highest level of dependencies. This is a simple organizational method that can be useful for our more complex service-based architectures.
Monolithic Dependencies
With DeployHub, these Domains are controlled by different User Groups and can contain lower level Sub-Domains. For example, Operation Teams may control the Infrastructure Domain, DBAs the middle tier and developers control the website frontend. As you break down your application into services, you will expand on these Domains. Starting at this monolithic level helps you begin thinking ‘functions’ and prepares you for the next step.
Migrating Microservices with DeployHub Domains
One of the goals for Domain Driven Design is taking an application and breaking it down into its smaller parts. We then organize those parts in a fashion that will allow reuse.
If we think about the potential microservices for a simple store website, we’ll have:
• the website frontend,
• a cart,
• credit card processing,
• a check-out system,
• advertisements and recommendations,
• sign-up,
• login,
• shipping,
• taxes.
We start by expanding the triangle. Think about what’s most common. As in our monolith, the lowest level is infrastructure and the database. In our service-based architecture, these are followed by credit card processing and login/sign-up. Above that we may have the cart, and on top of that the more specific pieces such as the ads and recommendations. At the peak of the triangle is the frontend.
Microservice Domains
This triangle shows how we build up from the most common pieces and represents how we would define Domains. Each section of our triangle becomes a Domain.
Domains and Sharing
Say we have another team that’s writing a store website, Website B. With DeployHub they find common pieces based on the Domains. Once you define your Domains, your developers can begin publishing and deploying microservices under the Domain structure making it possible for other teams to easily find and then reuse the services. When our next team creates a new store, they begin the process of re-use.
Shared Microservices
Between Website A and B we have accomplished a level of reuse for all the shareable layers from the infrastructure to the cart. This allows us to write the login, sign up, the cart, and the credit card processing once. DeployHub facilitates this level of design by supporting a highly customizable Domain structure allowing you to define what your triangle looks like and then Domains make it easy to find and share the microservices.
The OOPS in Object-Oriented Programming — Let’s Not Repeat that Mistake
Object-Oriented Programming (OOP) attempted to achieve a similar level of reuse. Part of the problem with OOP was that teams could not reliably run builds in which linking was carefully managed from a central ‘common’ location. To work around this, developers would add the ‘common’ object, like our login service, to their local repository so the build could use it. Their build would then always reference that version of the library, and updates to it would be private. There would be little, if any, contribution of updates back to the ‘common’ version. This mistake could easily be repeated with microservices. That would be unfortunate and would represent a failure in the implementation of a service-based architecture.
The question is always ‘how backward compatible are the new versions?’ Are we breaking any signatures on the methods? If we’re not breaking any of the interfaces, can we consume the latest versions? And which application is consuming which version of a service?
For example, website B is going to use v2 of the login and website A will use v3. We need to track which Website is using which version. If you want to deprecate v2, it will require Website B move to v3. What is needed is a method for impact analysis to accurately make these types of decisions. This tracking ability is key to a successful microservice implementation and part of DeployHub’s Domain dependency management features. DeployHub provides this level of tracking, versioning and reporting to make every microservice deployment successful and visible for high frequency updates.
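As a toy illustration of that impact-analysis question, the lookup below shows which applications consume a given service version. The data and names are invented for the example; in practice this inventory would come from DeployHub’s version tracking rather than a hand-maintained list.

```shell
#!/bin/sh
# Hypothetical inventory: application;app_version;service;service_version
deps="website_a;v1.4;login;v3
website_b;v2.1;login;v2"

# List every application pinned to a given version of a service, so the
# impact of deprecating that version is visible before you act.
consumers_of() {  # $1 = service name, $2 = service version
  echo "$deps" | awk -F';' -v s="$1" -v v="$2" '$3==s && $4==v {print $1}'
}

consumers_of login v2   # prints: website_b
```

Here, deprecating login v2 clearly requires Website B to move to v3 first, which is exactly the decision the Domain dependency tracking is meant to support.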
Further Reading on Domains
For more information on DeployHub’s Domains, refer to the User Guide Section on “Using Domains.”
More Information
Versioning your Deployment Configuration for a Single Source of Truth
Domain Driven Design using Microservices
Migrating to Microservices — Step 1: Define your Domains — Whitepaper
Microservices and Components
DeployHub’s Version Engine
Drive your Deployment Process using the Jenkins Continuous Deployment Plug-in
Track Component to Endpoint with a Feedback Loop
DeployHub and Jenkins — This Demo shows how DeployHub interacts with the Jenkins pipeline (including the blue ocean visualization).
DeployHub Team Sign-up — The hosted team version can be used to deploy to unlimited endpoints by unlimited users.
Get Involved in the OS Project — Help us create the best, open source continuous deployment platform available.
https://www.deployhub.com/jenkins-continuo…ployment-plug-in/
0 notes
deployhub-blog · 5 years ago
Video
youtube
How to Update DNS Server using DeployHub and Ansible. 
0 notes
deployhub-blog · 5 years ago
Video
youtube
Here’s how to use CircleCI and DeployHub to drive continuous deployments. 
0 notes
deployhub-blog · 5 years ago
Video
youtube
Kubernetes Pipelines - Hello New World 3/20/19
Here’s a webinar our CEO Tracy Ragan put on this spring that explains the need for us all to shift to microservices. 
0 notes
deployhub-blog · 5 years ago
Video
youtube
DevOps Thought Leadership Meetup - July2019
If you want to understand how to migrate away from monolithic deployments, Steve explains how we do it. 
0 notes
deployhub-blog · 5 years ago
Photo
Tumblr media
If you need help managing the shifting architecture, we have a suggestion for you. I’ll post the article in a minute. 
0 notes