#azure logic apps vs power automate
Understanding Substrings in Power Automate and Azure Logic Apps
In today's digital age, automation tools are essential for streamlining workflows and improving efficiency. Two popular automation tools within the Microsoft ecosystem are Power Automate and Azure Logic Apps. Both offer powerful capabilities for designing automated workflows, but they cater to slightly different needs and scenarios. This article explores how substrings are handled in Power Automate and Azure Logic Apps, helping you understand which tool might be the best fit for your needs.
Substrings in Power Automate
Power Automate, formerly known as Microsoft Flow, is a user-friendly tool designed for automating tasks and processes across various applications. One of its key features is its ability to manipulate text, including working with substrings. Substrings are simply portions of a larger string, and manipulating them is crucial for tasks such as data extraction or message formatting.
Key points:
User interface: Power Automate provides a no-code interface where you build flows in a visual designer. This makes it accessible to users who may not be familiar with programming.
String manipulation actions: In Power Automate, you can perform substring operations using actions like "Compose" together with expressions. These let you extract, replace, and manipulate parts of strings without writing complex code.
Example use case: If you have a string like "OrderID:12345" and need to extract the numeric part, Power Automate's string functions let you specify the exact position and length of the substring to extract.
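For instance, a minimal sketch using the expression language that Power Automate and Logic Apps share (the Compose action name Compose_Order is an assumption):

```
substring('OrderID:12345', 8, 5)

substring(outputs('Compose_Order'),
          add(indexOf(outputs('Compose_Order'), ':'), 1),
          sub(length(outputs('Compose_Order')),
              add(indexOf(outputs('Compose_Order'), ':'), 1)))
```

The first form hard-codes the start index (8, just past "OrderID:") and the length (5); the second uses indexOf and length so the extraction still works when the prefix or the number's length varies.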
Substrings in Azure Logic Apps
Azure Logic Apps is a more advanced and versatile tool designed for building complex workflows and integrations in the cloud. It is often used in enterprise scenarios where scalability and advanced integration capabilities are required. Azure Logic Apps also supports substring operations, but it operates in a more developer-oriented environment.
Key points:
Developer flexibility: Azure Logic Apps offers a wider range of connectors and integration options, catering to more complex automation needs. Its visual designer is powerful but can be less intuitive than Power Automate's.
Advanced string functions: Azure Logic Apps provides a range of built-in functions for manipulating strings, including substring operations where you specify the starting position and length programmatically.
Example use case: For a more complex scenario, such as extracting a substring from a dynamic content field or from the response to an HTTP request, Azure Logic Apps allows more granular control and advanced expressions, as sketched below.
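As a rough sketch of how that can look in a Logic Apps workflow definition (the HTTP action name and the orderRef response field are hypothetical):

```json
{
  "Extract_Order_Number": {
    "type": "Compose",
    "inputs": "@substring(body('HTTP')?['orderRef'], add(indexOf(body('HTTP')?['orderRef'], ':'), 1), 5)",
    "runAfter": {
      "HTTP": [ "Succeeded" ]
    }
  }
}
```

The Compose action runs after the HTTP call succeeds and extracts five characters starting just past the colon in the response field.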
Comparing Power Automate and Azure Logic Apps
While both Power Automate and Azure Logic Apps handle substring operations effectively, the choice between them often comes down to the specific needs of your project:
Power Automate is ideal for users who prefer a low-code or no-code approach with an intuitive interface for simpler automation needs.
Azure Logic Apps is better suited to scenarios that require advanced integrations, complex workflows, and greater control over automation processes.
Conclusion
Both Power Automate and Azure Logic Apps offer robust capabilities for working with substrings and automating workflows. Power Automate excels in usability and simplicity, making it a great choice for straightforward tasks. Azure Logic Apps, on the other hand, provides more advanced features and flexibility, catering to complex, enterprise-level automation needs.
Kubernetes vs. Docker: What Does It Really Mean?
“Kubernetes vs. Docker” is a phrase that you hear more and more these days as Kubernetes becomes ever more popular as a container orchestration solution.
However, “Kubernetes vs. Docker” is also a somewhat misleading phrase. When you break it down, these words don’t mean what many people intend them to mean, because Docker and Kubernetes aren’t direct competitors. Docker is a containerization platform, and Kubernetes is a container orchestrator for container platforms like Docker.
This post aims to clear up some common confusion surrounding Kubernetes and Docker, and explain what people really mean when they talk about “Docker vs. Kubernetes.”
The Rise of Containerization and Docker
It is impossible to talk about Docker without first exploring containers. Containers solve a critical issue in the life of application development. When developers are writing code, they are working in their own local development environment. Problems arise when they are ready to move that code to production: the code that worked perfectly on their machine doesn't work in production. The reasons for this are varied: a different operating system, different dependencies, different libraries.
Containers solved this critical issue of portability by allowing you to separate code from the underlying infrastructure it runs on. Developers can package up their application, including all of the bins and libraries it needs to run correctly, into a small container image. In production, that container can be run on any computer that has a containerization platform.
Advantages of Containers
In addition to solving the major challenge of portability, containers and container platforms provide many advantages over traditional virtualization.
Containers have an extremely small footprint. A container just needs its application and a definition of all of the bins and libraries it requires to run. Unlike VMs, each of which carries a complete copy of a guest operating system, container isolation is done at the kernel level without the need for a guest operating system. In addition, libraries can be shared across containers, eliminating the need to keep 10 copies of the same library on a server and further saving space. If I have three apps all running Node and Express, I don't need three instances of Node and Express; those apps can share those bins and libraries. Encapsulating applications in self-contained environments allows for quicker deployments, closer parity between development environments, and far greater scalability.
What is Docker?
Docker is currently the most popular container platform. Docker appeared on the market at the right time, and was open source from the beginning, which likely led to its current market domination. 30% of enterprises currently use Docker in their AWS environment and that number continues to grow.
When most people talk about Docker, they are talking about Docker Engine, the runtime that allows you to build and run containers. But before you can run a Docker container, it must be built, starting with a Dockerfile. The Dockerfile defines everything needed to run the image, including the OS, network specifications, and file locations. From the Dockerfile you build a Docker image, the portable, static component that gets run on the Docker Engine. And if you don't want to start from scratch, Docker even has a service called Docker Hub, where you can store and share images.
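As an illustrative sketch (not from the original article), a Dockerfile for a hypothetical Node.js service might look like this:

```dockerfile
# Start from an official Node.js base image pulled from Docker Hub.
FROM node:20-alpine

# Copy the application into the image and install its dependencies.
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .

# Document the listening port and define the startup command.
EXPOSE 3000
CMD ["node", "server.js"]
```

Running "docker build -t my-app ." turns this definition into an image, and "docker run my-app" starts it on any machine with Docker Engine installed.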
The Need for Orchestration Systems
While Docker provided an open standard for packaging and distributing containerized applications, there arose a new problem. How would all of these containers be coordinated and scheduled? How do you seamlessly upgrade an application without any interruption of service? How do you monitor the health of an application, know when something goes wrong and seamlessly restart it?
Solutions for orchestrating containers soon emerged. Kubernetes, Mesos, and Docker Swarm are some of the more popular options for providing an abstraction to make a cluster of machines behave like one big machine, which is vital in a large-scale environment.
When most people talk about “Kubernetes vs. Docker,” what they really mean is “Kubernetes vs. Docker Swarm.” The latter is Docker’s own native clustering solution for Docker containers, which has the advantage of being tightly integrated into the ecosystem of Docker, and uses its own API. Like most schedulers, Docker Swarm provides a way to administer a large number of containers spread across clusters of servers. Its filtering and scheduling system enables the selection of optimal nodes in a cluster to deploy containers.
Kubernetes is a container orchestrator that was developed at Google and has since been donated to the CNCF, making it open source. It has the advantage of leveraging Google's years of expertise in container management. It is a comprehensive system for automating deployment, scheduling and scaling of containerized applications, and supports many containerization tools such as Docker.
For now, Kubernetes is the market leader and the standardized means of orchestrating containers and deploying distributed applications. Kubernetes can be run on a public cloud service or on-premises, is highly modular, open source, and has a vibrant community. Companies of all sizes are investing into it, and many cloud providers offer Kubernetes as a service. Sumo Logic provides support for all orchestration technologies, including Kubernetes-powered applications.
How does Kubernetes work?
It is easy to get lost in the details of Kubernetes, but at the end of the day, what Kubernetes is doing is pretty simple. Cheryl Hung of the CNCF describes Kubernetes as a control loop. Declare how you want your system to look (3 copies of container image a and 2 copies of container image b) and Kubernetes makes that happen. Kubernetes compares the desired state to the actual state, and if they aren't the same, it takes steps to correct it, as the sketch below illustrates.
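A minimal sketch of that declarative model, assuming a hypothetical image name: a Deployment manifest asking Kubernetes to keep three replicas running.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a
spec:
  replicas: 3                  # desired state: 3 copies of container image a
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
    spec:
      containers:
        - name: app-a
          image: registry.example.com/app-a:1.0   # hypothetical image
```

If one replica dies, the observed count (2) no longer matches the declared count (3), and the control loop starts a replacement.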
Kubernetes architecture and components
Kubernetes is made up of many components that do not know or care about each other. The components all talk to each other through the API server. Each of these components operates its own function and then exposes metrics that we can collect for monitoring later on. We can break the components down into three main parts.
The Control Plane - The Master.
Nodes - Where pods get scheduled.
Pods - Holds containers.
The Control Plane - The Master Node
The control plane is the orchestrator. Kubernetes is an orchestration platform, and the control plane facilitates that orchestration. Multiple components in the control plane help facilitate it: etcd for storage; the API server for communication between components; the scheduler, which decides which nodes pods should run on; and the controller manager, responsible for checking the current state against the desired state.
Nodes
Nodes make up the collective compute power of the Kubernetes cluster. This is where containers actually get deployed to run. Nodes are the physical infrastructure that your application runs on: the servers or VMs in your environment.
Pods
Pods are the lowest-level resource in the Kubernetes cluster. A pod is made up of one or more containers, but most commonly just a single container. When defining your cluster, limits are set for pods that define what resources (CPU and memory) they need to run. The scheduler uses this definition to decide on which nodes to place the pods (see the sketch below). If there is more than one container in a pod, it is difficult to estimate the required resources, and the scheduler will not be able to place pods appropriately.
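For illustration, resource requests and limits in a single-container pod spec might look like this (all names and values are invented):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0   # hypothetical image
      resources:
        requests:            # what the scheduler uses for placement
          cpu: "250m"
          memory: "128Mi"
        limits:              # hard caps enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```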
How Does Kubernetes Relate to Docker?
Kubernetes and Docker are both comprehensive de-facto solutions to intelligently manage containerized applications and provide powerful capabilities, and from this some confusion has emerged. “Kubernetes” is now sometimes used as a shorthand for an entire container environment based on Kubernetes. In reality, they are not directly comparable, have different roots, and solve for different things.
Docker is a platform and tool for building, distributing, and running Docker containers. It offers its own native clustering tool that can be used to orchestrate and schedule containers on machine clusters. Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner. It works around the concept of pods, which are scheduling units (and can contain one or more containers) in the Kubernetes ecosystem, and they are distributed among nodes to provide high availability. One can easily run a Docker build on a Kubernetes cluster, but Kubernetes itself is not a complete solution and is meant to include custom plugins.
Kubernetes and Docker are both fundamentally different technologies but they work very well together, and both facilitate the management and deployment of containers in a distributed architecture.
Can you use Docker without Kubernetes?
Docker is commonly used without Kubernetes; in fact, this is the norm. While Kubernetes offers many benefits, it is notoriously complex, and there are many scenarios where the overhead of spinning up Kubernetes is unnecessary or unwanted.
In development environments it is common to use Docker without a container orchestrator like Kubernetes. In production environments, the benefits of using a container orchestrator often do not outweigh the cost of the added complexity. Additionally, many public cloud services like AWS, GCP, and Azure provide orchestration capabilities of their own, making that added complexity unnecessary.
Source: https://www.sumologic.com/blog/kubernetes-vs-docker/
The business case for serverless
Zack Kanter is the co-founder of Stedi.
While serverless is typically championed as a way to reduce costs and scale massively on demand, there is one extraordinarily compelling reason above all others to adopt a serverless-first approach: it is the best way to achieve maximum development velocity over time. It is not easy to implement correctly and is certainly not a cure-all, but, done right, it paves an extraordinary path to maximizing development velocity, and it is because of this that serverless is the most under-hyped, under-discussed tech movement amongst founders and investors today.
The case for serverless starts with a simple premise: if the fastest startup in a given market is going to win, then the most important thing is to maintain or increase development velocity over time. This may sound obvious, but very, very few startups state maintaining or increasing development velocity as an explicit goal.
“Development velocity,” to be specific, means the speed at which you can deliver an additional unit of value to a customer. Of course, an additional unit of customer value can be delivered either by shipping more value to existing customers, or by shipping existing value—that is, existing features—to new customers.
For many tech startups, particularly in the B2B space, both of these are gated by development throughput (the former for obvious reasons, and the latter because new customer onboarding is often limited by onboarding automation that must be built by engineers).
What does serverless mean, exactly? It’s a bit of a misnomer. Just as cloud computing didn’t mean that data centers disappeared into the ether — it meant that those data centers were being run by someone else, and servers could be provisioned on-demand and paid for by the hour — serverless doesn’t mean that there aren’t any servers.
There always have to be servers somewhere. Broadly, serverless means that you aren’t responsible for all of the configuration and management of those servers. A good definition of serverless is pay-per-use computing where uptime is out of the developer’s control. With zero usage, there is zero cost. And if the service goes down, you are not responsible for getting it back up. AWS started the serverless movement in 2014 with a “serverless compute” platform called AWS Lambda.
Whereas a ‘normal’ cloud server like AWS’s EC2 offering had to be provisioned in advance and was billed by the hour regardless of whether or not it was used, AWS Lambda was provisioned instantly, on demand, and was billed only per request. Lambda is astonishingly cheap: $0.0000002 per request plus $0.00001667 per gigabyte-second of compute. And while users have to increase their server size if they hit a capacity constraint on EC2, Lambda will scale more or less infinitely to accommodate load — without any manual intervention. And, if an EC2 instance goes down, the developer is responsible for diagnosing the problem and getting it back online, whereas if a Lambda dies another Lambda can just take its place.
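To make those rates concrete, a quick back-of-the-envelope sketch (the workload numbers are invented for illustration, and the free tier is ignored):

```python
# Rough monthly Lambda bill for a hypothetical workload, using the
# per-request and per-GB-second rates quoted above.
requests_per_month = 10_000_000
avg_duration_s = 0.2   # 200 ms per invocation
memory_gb = 0.5        # 512 MB allocated to the function

request_cost = requests_per_month * 0.0000002
compute_cost = requests_per_month * avg_duration_s * memory_gb * 0.00001667

print(f"requests: ${request_cost:.2f}")  # requests: $2.00
print(f"compute:  ${compute_cost:.2f}")  # compute:  $16.67
```

Ten million requests a month comes to under twenty dollars, which is the point of the pricing model.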
Although Lambda—and equivalent services like Azure Functions or Google Cloud Functions—is incredibly attractive from a cost and capacity standpoint, the truth is that saving money and preparing for scale are very poor reasons for a startup to adopt a given technology. Few startups fail as a result of spending too much money on servers or from failing to scale to meet customer demand — in fact, optimizing for either of these things is a form of premature scaling, and premature scaling on one or many dimensions (hiring, marketing, sales, product features, and even hierarchy/titles) is the primary cause of death for the vast majority of startups. In other words, prematurely optimizing for cost, scale, or uptime is an anti-pattern.
When people talk about a serverless approach, they don’t just mean taking the code that runs on servers and chopping it up into Lambda functions in order to achieve lower costs and easier scaling. A proper serverless architecture is a radically different way to build a modern software application — a method that has been termed a serverless, service-full approach.
It starts with the aggressive adoption of off-the-shelf platforms—that is, managed services—such as AWS Cognito or Auth0 (user authentication—sign up and sign in—as-a-service), AWS Step Functions or Azure Logic Apps (workflow-orchestration-as-a-service), AWS AppSync (GraphQL backend-as-a-service), or even more familiar services like Stripe.
Whereas Lambda-like offerings provide functions as a service, managed services provide functionality as a service. The distinction, in other words, is that you write and maintain the code (e.g., the functions) for serverless compute, whereas the provider writes and maintains the code for managed services. With managed services, the platform is providing both the functionality and managing the operational complexity behind it.
By adopting managed services, the vast majority of an application’s “commodity” functionality—authentication, file storage, API gateway, and more—is handled by the cloud provider’s various off-the-shelf platforms, which are stitched together with a thin layer of your own ‘glue’ code. The glue code — along with the remaining business logic that makes your application unique — runs on ultra-cheap, infinitely-scalable Lambda (or equivalent) infrastructure, thereby eliminating the need for servers altogether. Small engineering teams like ours are using it to build incredibly powerful, easily-maintainable applications in an architecture that yields an unprecedented, sustainable development velocity as the application gets more complex.
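As an illustration of what that thin glue layer can look like, here is a minimal Lambda handler sketch in Python; the event shape, bucket name, and business logic are all assumptions:

```python
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # The only code that is truly ours: one piece of business logic.
    order = json.loads(event["body"])
    order["total"] = sum(item["price"] * item["qty"] for item in order["items"])

    # Persistence is delegated to a managed service (S3), not a server we run.
    s3.put_object(
        Bucket="example-orders-bucket",  # hypothetical bucket
        Key=f"orders/{order['id']}.json",
        Body=json.dumps(order),
    )
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```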
There is a trade-off to adopting the serverless, service-full philosophy. Building a radically serverless application requires taking an enormous hit to short-term development velocity, since it is often much, much quicker to build a “service” than it is to use one of AWS’s off-the-shelf offerings. When developers are considering a service like Stripe, “build vs buy” isn’t even a question—it is unequivocally faster to use Stripe’s payment service than it is to build a payment service yourself. More accurately, it is faster to understand Stripe’s model for payments than it is to understand and build a proprietary model for payments—a testament both to the complexity of the payment space and to the intuitive service that Stripe has developed.
But for developers dealing with something like authentication (Cognito or Auth0) or workflow orchestration (AWS Step Functions or Azure Logic Apps), it is generally slower to understand and implement the provider’s model for a service than it is to implement the functionality within the application’s codebase (either by writing it from scratch or by using an open source library). By choosing to use a managed service, developers are deliberately choosing to go slower in the short term—a tough pill for a startup to swallow. Many, understandably, choose to go fast now and roll their own.
The problem with this approach comes back to an old axiom in software development: “code isn’t an asset—code is debt.” Code requires an entry on both sides of the accounting equation. It is an asset that enables companies to deliver value to the customer, but it also requires maintenance that has to be accounted for and distributed over time. All things equal, startups want the smallest codebase possible (provided, of course, that developers aren’t taking this too far and writing clever but unreadable code). Less code means less surface area to maintain, and also means less surface area for new engineers to grasp during ramp-up.
Herein lies the magic of using managed services. Startups get the beneficial use of the provider’s code as an asset without holding that code debt on their “technical balance sheet.” Instead, the code sits on the provider’s balance sheet, and the provider’s engineers are tasked with maintaining, improving, and documenting that code. In other words, startups get code that is self-maintaining, self-improving, and self-documenting—the equivalent of hiring a first-rate engineering team dedicated to a non-core part of the codebase—for free. Or, more accurately, at a predictable per-use cost. Contrast this with rolling your own: on day one, perhaps a managed service like Cognito or Auth0 doesn’t have all of the features on a startup’s wish list. The difference is that the provider has a team of engineers and product managers whose sole task is to ship improvements to this service day in and day out. Their exciting core product is another company’s would-be redheaded stepchild.
If there is a single unifying principle amongst a startup’s engineering team, it should be to write as little code—and be responsible for as few non-core services—as humanly possible. By adopting this philosophy, a startup can build a platform that can process billions of transactions at an extremely predictable, purely-variable cost with nearly zero devops oversight.
Being this lazy takes a surprising amount of discipline. Getting good at managing a serverless codebase and serverless infrastructure is nontrivial. It means building extensive practices around testing and automation, which means an even larger upfront time investment. Integrating with a managed service can be unbelievably painful, with days spent trying to understand all of the gaps, gotchas, and edge cases. The temptation to implement a proprietary solution can be incredible, especially when it means a story can be done in a matter of minutes or hours instead of days or longer.
It means writing wonky workarounds when a service only accommodates 80% of a developer’s needs. And as the missing 20% of functionality is released, it means refactoring code to remove the workaround, even when it is working just fine and there is no near-term benefit to changing it. The substantial early time investment means that a serverless/managed-service-first approach is not right for every startup. The most important question to ask is, over what time scale do we need to be fast? If the answer is days or weeks, as is the case for many very early-stage startups, it is probably not the right approach.
But if the timescale for velocity optimization has shifted from days or weeks to months or years, it is worth taking a close look at going serverless.
Recruiting great engineers is extraordinarily hard—and only getting harder. It is a tremendous competitive advantage to task those engineers with building differentiated business functionality while your competitors build services that do commoditized, undifferentiated heavy lifting, and then remain stuck with the maintenance of those services for years to come. Of course, there are certain cases where serverless just doesn’t make sense, but those are disappearing at a rapid rate (for example, Lambda’s 5-minute timeout was recently tripled to 15 minutes)—and reasons such as lock-in or latency are generally nonsense or a thing of the past.
Ultimately, the job of a software startup—and therefore the job of the founder—is to deliver customer value above and beyond the capability of the competition. That job comes down to maximizing development velocity, which, in turn, comes down to mitigating complexity wherever possible. It may be that every codebase, and therefore every startup, is destined to become “a big ball of mud”—the term coined in a 1997 paper to describe the “haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle” that every software project seems eventually destined to become.
One day, complexity will grow past a breaking point and development velocity will begin to decline irreversibly, and so the ultimate job of the founder is to push that day off as long as humanly possible. The best way to do that is to keep your ball of mud to the minimum possible size— serverless is the most powerful tool ever developed to do exactly that.
via TechCrunch
The Benefits of Azure Logic Apps vs Power Automate
When it comes to automating workflows and integrating applications, Microsoft offers two powerful tools: Azure Logic Apps and Power Automate. The two overlap but cater to different needs and use cases. Understanding the advantages of each can help businesses and developers make an informed decision based on their requirements.
Understanding Azure Logic Apps
Azure Logic Apps is a cloud-based service that lets users automate workflows and integrate apps, data, services, and systems. It is designed for developers and IT professionals building complex workflows that require advanced integration and orchestration capabilities.
Understanding Power Automate
Power Automate, formerly known as Microsoft Flow, is a service aimed at enabling business users to create automated workflows between their favorite apps and services. It emphasizes ease of use and accessibility, making it a great choice for users who may not have a technical background.
Key Benefits of Azure Logic Apps
Advanced Integration Capabilities:
Azure Logic Apps excels at handling complex and large-scale integrations. It supports a wide variety of enterprise systems, including on-premises systems, mainframes, and third-party services.
It can connect to practically any service through its extensive library of connectors.
Scalability and Performance:
Azure Logic Apps is built to handle high volumes of data and transactions. It scales automatically to meet the demands of your workflows.
It offers features like parallel execution and debatching to improve performance, as the sketch below illustrates.
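As a rough illustration of debatching: a Logic Apps trigger can use splitOn so that each item in an incoming array spawns its own workflow run (the trigger name and payload shape here are hypothetical):

```json
{
  "triggers": {
    "manual": {
      "type": "Request",
      "kind": "Http",
      "splitOn": "@triggerBody()?['orders']",
      "inputs": {
        "schema": {
          "type": "object",
          "properties": {
            "orders": { "type": "array" }
          }
        }
      }
    }
  }
}
```

A single request carrying fifty orders then fans out into fifty independent runs that execute in parallel.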
Flexible Development and Deployment:
Developers can use familiar tools like Visual Studio and Azure DevOps for development and deployment.
Azure Logic Apps supports multiple languages and technologies, making it adaptable to different development needs.
B2B Integration and EDI:
It offers robust support for business-to-business (B2B) integration, including Electronic Data Interchange (EDI) standards.
This makes it an ideal choice for enterprises that need to automate B2B processes and transactions.
Enterprise-Grade Security and Compliance:
Azure Logic Apps offers enterprise-grade security features, including role-based access control (RBAC), managed identities, and virtual network service endpoints.
It also complies with a wide range of industry standards and regulations.
Key Benefits of Power Automate
User-Friendly Interface:
Power Automate is designed with business users in mind. Its intuitive interface lets users create workflows without needing to write code.
The visual workflow designer makes it easy to build and modify flows.
Integration with Microsoft 365:
Power Automate integrates seamlessly with Microsoft 365 applications such as SharePoint, OneDrive, Teams, and Outlook.
This integration simplifies the automation of common business processes within the Microsoft ecosystem.
AI Builder and RPA:
Power Automate includes AI Builder, which lets users add AI capabilities to their workflows without requiring data science expertise.
It also supports Robotic Process Automation (RPA), allowing users to automate repetitive desktop tasks.
Mobile Accessibility:
Power Automate offers a mobile app, enabling users to create, manage, and trigger workflows from their phones.
This ensures that automation can be managed on the go, improving productivity.
Cost-Effective for Small to Medium Businesses:
Power Automate offers pricing plans that are affordable for small to medium-sized businesses.
It provides a cost-effective solution for businesses looking to automate routine tasks and processes without significant investment.
Choosing Between Azure Logic Apps and Power Automate
The choice between Azure Logic Apps and Power Automate largely depends on the specific needs and capabilities of the user or organization:
Use Azure Logic Apps if:
You need to integrate with many enterprise systems and third-party services.
Your workflows require advanced orchestration, scalability, and performance.
You need support for B2B integration and EDI.
Your organization has developers and IT professionals who can take advantage of its advanced features.
Use Power Automate if:
You are a business user looking for an easy-to-use tool to automate workflows.
Your primary use case involves automating processes within the Microsoft 365 ecosystem.
You want to add AI capabilities and RPA to your workflows.
You are looking for a cost-effective solution suitable for small to medium-sized businesses.
Conclusion
Both Azure Logic Apps and Power Automate offer robust automation capabilities, but they cater to different audiences and use cases. Azure Logic Apps is ideal for developers and enterprises requiring advanced integration and scalability, while Power Automate is perfect for business users seeking a user-friendly, cost-effective way to automate everyday tasks. By understanding the strengths of each tool, organizations can choose the one that best meets their needs and maximize the efficiency of their workflows.