#opentelemetry
Text
Cassandra To Spanner Proxy Adapter Eases Yahoo’s Migration
Yahoo’s migration process is made easier with a new Cassandra to Spanner adapter.
Cassandra is a popular key-value NoSQL database for applications like caching, session management, and real-time analytics that demand fast data retrieval and storage. Its straightforward key-value pair structure ensures high performance and ease of maintenance, particularly for huge datasets.
However, this simplicity also has drawbacks, such as limited support for sophisticated queries, the potential for data duplication, and difficulty modeling complex relationships. Spanner, Google Cloud’s always-on, globally consistent, virtually unlimited-scale database, blends the scalability and availability of NoSQL with the strong consistency and relational model of traditional databases, which positions it well for classic Cassandra workloads. With the release of the Cassandra to Spanner Proxy Adapter, an open-source tool enabling plug-and-play migration of Cassandra workloads to Spanner without modifications to application logic, switching from Cassandra to Spanner is now simpler than ever.
Spanner for NoSQL workloads
Spanner offers strong consistency, high availability, near-infinite scalability, and a familiar relational data model with SQL support and ACID transactions for data integrity. As a fully managed service, it simplifies operations and frees teams to focus on building applications rather than managing databases. And by minimizing database downtime, Spanner’s high availability, even at global scale, supports business continuity.
Spanner keeps evolving to meet the demands of modern businesses. Its most recent capabilities include improved multi-model support, including graph, full-text, and vector search; higher analytical query performance with Spanner Data Boost; and enterprise features like geo-partitioning and dual-region configurations. These powerful features, together with Spanner’s compelling price-performance, open up exciting new opportunities for Cassandra users.
Yahoo has put the Cassandra to Spanner adapter to the test
“Spanner sounds like a leap forward from Cassandra,” you might be thinking. “How do I begin?” The proxy adapter offers a plug-and-play way to route Cassandra Query Language (CQL) traffic from your client apps to Spanner. Behind the scenes, the adapter acts as the application’s Cassandra client but communicates with Spanner internally for all data manipulation operations. The Cassandra to Spanner proxy adapter simply works, without requiring you to migrate your application code!
Yahoo benefited from increased performance, scalability, consistency, and operational efficiency after successfully migrating from Cassandra to Spanner. Additionally, the proxy adapter made the migration process simple.
Reltio is another Google Cloud client that has made the switch from Cassandra to Spanner. Reltio gained the advantages of a fully managed, globally distributed, and highly consistent database while minimizing downtime and service disruption through an easy migration process.
These success stories show that, for companies looking to modernize their data architecture, unlock new capabilities, and spur innovation, switching from Cassandra to Spanner can be a game-changer.
How does the new proxy adapter make your migration easier? A typical database migration involves the steps shown in the following diagram. (Image credit to Google Cloud.)
Some of these stages, such as moving your application (step 4) and moving the data (step 6), are more complicated than others. The proxy adapter makes it much easier to repoint a Cassandra-backed application to Spanner. Here is a high-level summary of the steps involved in using the new proxy adapter:
Assessment: Determine which parts of your Cassandra schema, data model, and query patterns can be simplified after switching to Spanner.
Schema design: The documentation thoroughly covers the similarities and differences between Spanner’s and Cassandra’s table declaration syntax and data types. For optimal efficiency, you can also use Spanner’s relational features and capabilities, such as interleaved tables.
Data migration: To move your data, follow these steps:
Bulk load: Export data from Cassandra and import it into Spanner using tools like the Spanner Dataflow connector or BigQuery reverse ETL.
Replicate incoming writes: Use Cassandra’s Change Data Capture (CDC) to replicate incoming updates from your Cassandra cluster to Spanner in near real time.
Another option is updating your application logic to dual-write to both Cassandra and Spanner. Google Cloud does not advise this method if you want to make as few modifications to your application code as possible.
Set up the proxy adapter and update your Cassandra configuration: Download and start the Cassandra to Spanner Proxy Adapter, which runs as a sidecar alongside your application. The proxy adapter uses port 9042 by default; if you choose a different port, remember to update your application configuration to match (a driver-level sketch follows this list).
Testing: To make sure everything functions as planned, thoroughly test your migrated application and data in a non-production setting.
Cutover: Move your application traffic to Spanner as soon as you are comfortable with the migration. Keep a watchful eye out for any problems and adjust performance as necessary.
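As referenced in the setup step above, here is a rough sketch of how an application connects through the sidecar proxy using the Python cassandra-driver package. The contact point is the proxy itself; the keyspace, table, and query are illustrative placeholders, not taken from the adapter’s documentation.

```python
# The application keeps using its normal Cassandra driver; only the
# contact point changes, now targeting the proxy adapter sidecar.
from cassandra.cluster import Cluster

# The proxy adapter listens on the default Cassandra port 9042.
cluster = Cluster(contact_points=["127.0.0.1"], port=9042)
session = cluster.connect("my_keyspace")  # hypothetical keyspace

# Plain CQL as before; the proxy translates it into Spanner calls.
row = session.execute(
    "SELECT id, name FROM users WHERE id = %s", ("42",)
).one()
print(row)

cluster.shutdown()
```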
What do the new proxy adapter’s internals look like?
To the application, the new proxy adapter looks like its Cassandra endpoint. The only discernible change from the application’s point of view is that the Cassandra endpoint’s IP address or hostname now points to the proxy adapter. This simplifies the Spanner migration without necessitating significant changes to the application code. (Image credit to Google Cloud.)
The proxy adapter is built to provide a one-to-one mapping between every Cassandra cluster and its matching Spanner database. A proxy instance uses a multi-listener architecture, with each listener bound to a different port. This makes it possible to handle several client connections at once, with each listener managing a separate connection to its designated Spanner database.
The proxy’s translation layer manages the complexities of the Cassandra protocol. This layer handles buffers and caches, decodes and encodes messages, and, most importantly, parses incoming CQL queries and converts them into Spanner-compatible counterparts.
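To make that concrete, here is a deliberately tiny sketch of the kind of mapping a translation layer performs. This is illustrative only; the type table below is a simplified assumption for this example, not the adapter’s actual implementation.

```python
# Toy illustration of one translation-layer concern: mapping CQL column
# types onto Spanner column types during schema/query conversion.
# The real proxy implements a full CQL parser and protocol codec.
CQL_TO_SPANNER_TYPES = {  # simplified, assumed mapping
    "text": "STRING(MAX)",
    "bigint": "INT64",
    "double": "FLOAT64",
    "boolean": "BOOL",
    "timestamp": "TIMESTAMP",
    "blob": "BYTES(MAX)",
}

def map_column_type(cql_type: str) -> str:
    """Map a CQL column type to a Spanner column type (simplified)."""
    return CQL_TO_SPANNER_TYPES.get(cql_type.lower(), "STRING(MAX)")

print(map_column_type("bigint"))  # -> INT64
```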
The proxy adapter supports OpenTelemetry to gather and export traces to Cloud Trace.
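For comparison, wiring OpenTelemetry tracing to Cloud Trace in your own Python services looks roughly like this, assuming the opentelemetry-sdk and opentelemetry-exporter-gcp-trace packages are installed; the span name is a placeholder.

```python
# Sketch: export OpenTelemetry spans to Cloud Trace from Python.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("cassandra-query"):  # placeholder name
    pass  # issue the CQL call through the proxy here
```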
Addressing common concerns and challenges
Let’s talk through some concerns you might have about migrating:
Cost: Take a look at Accenture’s benchmark results, which show that Spanner delivers cost effectiveness in addition to consistent latency and throughput. To help you make use of all of Spanner’s features, Google has also introduced a new tiered pricing structure called “Spanner editions,” which offers improved cost transparency and cost-saving options.
Latency increases: When running the proxy adapter in a Docker container, Google advises running it on the same host as the client application (as a sidecar proxy) or on the same Docker network to minimize added query latency. Google also advises keeping the proxy adapter host’s CPU usage below 80%.
Design flexibility: Cassandra offers more schema flexibility, while Spanner’s more rigid relational design brings benefits in data integrity, query capability, and consistency.
Learning curve: There are some differences between Cassandra’s and Spanner’s data types. The documentation examines these thoroughly to help with the transition.
Start now
For companies wishing to take advantage of the cloud’s full potential for NoSQL workloads, Spanner is an appealing choice due to its robust consistency, streamlined operations, improved data integrity, and worldwide scalability. Google Cloud is making it simpler to plan and implement your migration strategy with the new Cassandra to Spanner proxy adapter, allowing your company to enter a new era of data-driven innovation.
Read more on govindhtech.com
#Cassandra #SpannerProxyAdaptor #NoSQLdatabase #EasesYahooMigration #Spanner #OpenTelemetry #newproxyadapter #internalcomponents #news #NoSQLworkloads #technology #technews #govindhtech
Text
Datadog challenger Dash0 aims to curb observability bill shock
The end of zero interest rates has led companies to look for savings wherever they can, but one area remains a major budget drain. Observability (collecting and making sense of data about systems) often remains an organization’s second-largest cloud expense, right after cloud provisioning itself. Some have even begun to speak of…
Text
Checking your OpenTelemetry pipeline with Telemetrygen
Testing OpenTelemetry configuration pipelines without resorting to instrumented applications, particularly for traces, can be a bit of a pain. Typically, you just want to validate that you can get an exported/generated signal through your pipeline, which may not even be the OpenTelemetry Collector (e.g., Fluent Bit, or commercial solutions such as Datadog). This led to the creation of Tracegen, and then the…
Text
How to Test Service APIs
When you're developing applications, especially when doing so with microservices architecture, API testing is paramount. APIs are an integral part of modern software applications. They provide incredible value, making devices "smart" and ensuring connectivity.
No matter the purpose of an app, it needs reliable APIs to function properly. Service API testing is a process that analyzes multiple endpoints to identify bugs or inconsistencies in the expected behavior. Whether the API connects to databases or web services, issues can render your entire app useless.
Testing is integral to the development process, ensuring all data access goes smoothly. But how do you test service APIs?
Taking Advantage of Kubernetes Local Development
One of the best ways to test service APIs is to use a staging Kubernetes cluster. Local development allows teams to work in isolation in special lightweight environments. These environments mimic real-world operating conditions. However, they're separate from the live application.
Using local testing environments is beneficial for many reasons. One of the biggest is that you can perform all the testing you need before merging, ensuring that your application can continue running smoothly for users. Adding new features and joining code is always a daunting process because there's the risk that issues with the code you add could bring a live application to a screeching halt.
Errors and bugs can have a rippling effect, creating service disruptions that negatively impact the app's performance and the brand's overall reputation.
With Kubernetes local development, your team can work on new features and code changes without affecting what's already available to users. You can create a brand-new testing environment, making it easy to highlight issues that need addressing before the merge. The result is more confident updates and fewer application-crashing problems.
This approach is perfect for testing service APIs. In those lightweight simulated environments, you can perform functionality testing to ensure that the API does what it should, reliability testing to see if it can perform consistently, load testing to check that it can handle a substantial number of calls, security testing to define requirements and more.
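As a minimal sketch of what such a functional test can look like, here is a pytest-style check against a service running in a local cluster, assuming Python's requests library; the URL, route, and response fields are placeholders for your own API.

```python
# Functional API test sketch: verify status codes and response shape.
import requests

BASE_URL = "http://localhost:8080"  # e.g., port-forwarded from Kubernetes

def test_get_user_returns_expected_shape():
    resp = requests.get(f"{BASE_URL}/api/users/42", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Contract check: fields that callers rely on exist and are typed.
    assert isinstance(body["id"], int)
    assert isinstance(body["email"], str)

def test_unknown_user_returns_404():
    resp = requests.get(f"{BASE_URL}/api/users/999999", timeout=5)
    assert resp.status_code == 404
```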
Read a similar article about Kubernetes API testing here.
#kubernetes local development #opentelemetry and kubernetes #service mesh and kubernetes #what are dora metrics
Text
SigNoz: Free and Open Source Syslog server with OpenTelemetry
I am always on the lookout for new free and open-source tools for home lab and production environments. One really excellent tool I discovered recently is SigNoz. SigNoz is a free and open-source syslog server and observability platform that provides an open-source alternative to Datadog, New Relic, and others. Let’s look at SigNoz and see some of the features it offers. We will also…
#alert systems in observability #application performance management tools #Datadog vs. SigNoz #distributed tracing with SigNoz #exceptions monitoring best practices #log management solutions #metrics and dashboards guide #monitor applications with SigNoz #SigNoz and OpenTelemetry integration #SigNoz open-source alternative
Text
OpenTelemetry Tracing in < 200 lines of code
https://jeremymorrell.dev/blog/minimal-js-tracing/
Text
Telemetry Pipelines Workshop: Integrating Fluent Bit With OpenTelemetry, Part 1
http://securitytc.com/TCRLwP
Photo
Accelerate root cause analysis with OpenTelemetry and AI assistants
Text
OpenTelemetry vs Prometheus: OpenTelemetry Overview
OpenTelemetry vs Prometheus
Prometheus monitors, stores, and visualises metrics, but it does not keep logs or support traces for root cause analysis, so its use cases are more limited than OpenTelemetry’s.
Programming language-agnostic integrations allow OpenTelemetry to track more complex metrics than Prometheus, and its automated instrumentation model makes OTel more scalable and extensible. Unlike Prometheus, OpenTelemetry includes no storage solution and requires separate back-end infrastructure.
Quick summary:
Prometheus calculates cumulative measurements as a running total, whereas OpenTelemetry uses deltas.
Prometheus stores short-term data and metrics, whereas OTel must be paired with a separate storage solution.
OpenTelemetry uses a consolidated API to push or pull metrics, logs, and traces and transform them into a single language; Prometheus does not.
Prometheus pulls data from hosts to collect and store time-series metrics; OTel can translate measurements and is language-agnostic, giving developers more options.
Prometheus aggregates data and metrics using PromQL.
Prometheus provides web-visualised metrics and customisable alerts, while OpenTelemetry requires integration with separate visualisation tools.
OTel can represent metric values as integers rather than floating-point numbers, which are more precise and understandable; Prometheus cannot use integer metrics.
Your organization’s demands will determine which option is best. OpenTelemetry may be better for complex environments with distributed systems, a need for a holistic understanding of data, and flexibility; this also applies to log and trace monitoring.
Prometheus may be better suited to monitoring specific systems or processes, using its alerting, storage, and visualisation models.
Prometheus and OpenTelemetry
Application performance monitoring and optimisation are crucial for software developers and companies. Enterprises have more data to collect and analyse as they deploy more applications. Without the right tools for monitoring, optimising, storing, and contextualising data, that data is useless.
Monitoring and observability solutions can improve application health by discovering issues before they happen, highlighting bottlenecks, dispersing network traffic, and more. These capabilities reduce application downtime, improve performance, and enhance user experience.
App monitoring tools
OpenTelemetry and Prometheus are both open-source Cloud Native Computing Foundation (CNCF) initiatives. An organization’s goals and application specifications determine which solution fits which data and functions. Before using OpenTelemetry or Prometheus, you should know their main distinctions and what they offer.
OTel exports all three forms of telemetry data (logs, metrics, and traces) to Prometheus and other back ends. This lets developers choose their analysis tools and avoids vendor or back-end lock-in. OpenTelemetry integrates with many platforms, including Prometheus, to increase observability, and its support for Java, Python, JavaScript, and Go adds flexibility: developers and IT staff can monitor performance from any browser or location.
OpenTelemetry’s strength is its ability to gather and export data across multiple applications and to standardise the collection process. OTel enhances observability for distributed systems and microservices.
For application monitoring, OpenTelemetry and Prometheus integrate and operate well together: DevOps and IT teams can use them to collect and transform information for performance insights.
What is OpenTelemetry?
OpenTelemetry (OTel) helps generate, collect, export, and manage telemetry data, including logs, metrics, and traces, in one place. OTel grew out of OpenCensus and OpenTracing to standardise data gathering through APIs, SDKs, frameworks, and integrations. OTel lets you build monitoring outputs into your code to ease data processing and export data to the right back end.
Telemetry data helps determine system health and performance. Optimised observability speeds up troubleshooting, improves system reliability, reduces latency, and reduces application downtime.
OpenTelemetry architecture
APIs: OpenTelemetry’s APIs provide uniform interfaces across programming languages, letting applications capture telemetry data and helping standardise OpenTelemetry measurements.
SDKs: Software development kits. Frameworks, code libraries, and debuggers are the building blocks of software development. OTel SDKs implement the OpenTelemetry APIs and provide the tools to generate and collect telemetry data (a minimal sketch follows this list).
OpenTelemetry Collector: The OpenTelemetry Collector receives, processes, and exports telemetry data. OTel Collectors can be configured to filter specific data types to the back end.
Instrumentation libraries: OTel offers cross-platform instrumentation; its instrumentation libraries let OTel integrate with virtually any programming language.
OTLP: Using the OpenTelemetry Protocol (OTLP), telemetry data including metrics, logs, and traces can be collected without modifying code or metadata.
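As a minimal sketch of the SDK in action, the following Python program records a counter metric and exports it to the console, assuming the opentelemetry-sdk package; the meter and instrument names are illustrative.

```python
# Sketch: record a counter with the OpenTelemetry metrics API/SDK.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("checkout-service")  # illustrative name
requests_counter = meter.create_counter(
    "http.requests", description="Completed HTTP requests"
)
requests_counter.add(1, {"route": "/checkout", "status": "200"})
```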
Metrics: Metrics provide a high-level overview of system performance and health. Developers, IT, and business management teams decide which metrics to track to meet business goals for application performance; a team may measure network traffic, latency, or CPU storage. Metrics also let you track application performance trends.
Logs: Logs record programme or application events. DevOps teams can monitor component properties with logs, and historical data can show performance, exceeded thresholds, and errors. Logs track application ecosystem health.
Traces: Traces paint a broader picture of application performance than logs and aid optimisation. They follow a request through the application stack and are more focused than logs. Traces let developers pinpoint when errors or bottlenecks occur, how long they last, and how they affect the user journey. This data improves microservice management and application performance.
What is Prometheus?
Prometheus is a monitoring and alerting toolkit that collects and organises application metrics. SoundCloud created the Prometheus server before making it open source.
Prometheus enables end-to-end monitoring of time-series data. Time-series metrics capture recurring data, such as monthly sales or daily application traffic, and visibility into this data reveals patterns, trends, and projections for business planning. Once integrated with a host, Prometheus collects application metrics for the specific functions DevOps teams want to monitor.
Prometheus metrics are data points with a metric name, labels, timestamp, and value, queried using PromQL. For better visualisation, PromQL lets developers and IT departments aggregate metrics into histograms, graphs, and dashboards. Prometheus can also access enterprise databases and exporters; application exporters pull metrics from apps and endpoints.
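As a quick illustration, PromQL queries can also be run programmatically against Prometheus’ standard HTTP API; this sketch assumes a server on localhost:9090 and an illustrative metric name.

```python
# Sketch: run a PromQL query via the Prometheus HTTP API.
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "sum(rate(http_requests_total[5m])) by (job)"},
    timeout=5,
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print(result["metric"], result["value"])  # label set and [ts, value]
```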
Prometheus tracks four metric types
Counters: Counters measure monotonically increasing numerical values, such as completed tasks, faults, or processed requests across processes and microservices.
Gauges: Gauges measure numerical values that fluctuate due to external factors. They can monitor CPU, memory, temperature, and queue size.
Histograms: Histograms measure events like request duration and response size. They split the range of these measurements into buckets and count how many observations fall into each bucket.
Summaries: Like histograms, summaries measure request durations and response sizes, but they also report a count and a running total of all observed values.
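A minimal sketch of all four types with the official prometheus_client Python library follows; metric names, values, and the port are illustrative.

```python
# Sketch: declare Prometheus' four metric types and expose them for scraping.
import random
import time

from prometheus_client import Counter, Gauge, Histogram, Summary, start_http_server

ERRORS = Counter("app_errors_total", "Total errors observed")
QUEUE_SIZE = Gauge("app_queue_size", "Current queue depth")
LATENCY = Histogram("app_request_seconds", "Request duration in seconds")
PAYLOAD = Summary("app_response_bytes", "Response size in bytes")

start_http_server(8000)  # metrics served at http://localhost:8000/metrics
for _ in range(60):  # emit sample observations for a minute
    ERRORS.inc()
    QUEUE_SIZE.set(random.randint(0, 10))
    LATENCY.observe(random.random())
    PAYLOAD.observe(random.randint(200, 2000))
    time.sleep(1)
```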
Prometheus’ data-driven dashboards and graphs are also useful.
Benefits of Prometheus
Prometheus provides real-time application monitoring for accurate insights and fast troubleshooting, and it permits function-specific thresholds: when thresholds are hit or exceeded, alerts speed up problem resolution. Prometheus stores massive amounts of metrics data for analytics teams, but it is designed for immediate examination rather than long-term storage; Prometheus typically retains data for two to fifteen days.
Prometheus works seamlessly with Kubernetes, the open-source container orchestration platform for scheduling, managing, and scaling containerised workloads. Kubernetes lets companies build hybrid and multicloud systems with many services and microservices, and with Prometheus these complex systems gain full-stack observability and oversight.
Grafana
Grafana, a powerful visualisation tool, works with Prometheus to create dashboards, charts, graphs, and alerts. With Prometheus as a data source, Grafana can visualise metrics, and the compatibility between the two platforms makes complex data easier to share between teams.
Integrating OpenTelemetry with Prometheus
No need to choose: OpenTelemetry and Prometheus are compatible. Prometheus data models support OpenTelemetry metrics, and OTel SDKs can gather them. Together, the two provide the best of both worlds and enhanced monitoring. For example:
When combined, OTel and Prometheus monitor complex systems and deliver real-time application insights; OTel’s tracing and monitoring technologies work alongside Prometheus’ alerting.
Prometheus handles large volumes of data. This capability, plus OTel’s ability to combine metrics, traces, and logs into one interface, improves system and application scalability.
PromQL can generate visualisation models from OpenTelemetry data.
OpenTelemetry and Prometheus interface with IBM Instana and Turbonomic to provide additional monitoring tools. Instana’s connection map, upstream/downstream service connections, and full-stack visibility let OTel monitor all services, giving the same experience with OTel data as with all other data sources and providing the context needed to swiftly detect and address application problems. Turbonomic automates real-time, data-driven resourcing decisions using Prometheus’ data monitoring capabilities. These integrations boost application ecosystem health and performance.
Read more on Govindhtech.com
#Programming #OpenTelemetry #ibm #kubernets #devops #multicloud #apis #microservices #technology #technews #govindhtech
Text
From ThoughtWorks radar 10/2024:
[Trial] ClickHouse is an open-source, columnar online analytical processing (OLAP) database for real-time analytics. It started as an experimental project in 2009 and has since matured into a highly performant and linearly scalable analytical database. Its efficient query processing engine together with data compression makes it suitable to run interactive queries without pre-aggregation. ClickHouse is also a great storage choice for OpenTelemetry data. Its integration with Jaeger allows you to store massive volumes of traces and analyze them efficiently.
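As a sketch of that workflow, here is a query over trace data in ClickHouse using the clickhouse-connect Python client. The table and column names follow the OpenTelemetry Collector’s ClickHouse exporter defaults, but treat them as assumptions for your own deployment.

```python
# Sketch: find the slowest recent spans stored in ClickHouse.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", port=8123)
result = client.query(
    "SELECT TraceId, SpanName, Duration "
    "FROM otel_traces "  # assumed exporter-default table name
    "ORDER BY Duration DESC "
    "LIMIT 10"
)
for trace_id, span_name, duration in result.result_rows:
    print(trace_id, span_name, duration)
```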
Text
Observability in Action - Book Review
With the Christmas holidays happening, things slowed down enough to sit and catch up on some reading – which included Cloud Observability in Action by Michael Hausenblas from Manning. You could ask: why would I read a book about a domain I’ve already written about (Logging In Action with Fluentd) and have an active book in development for (Fluent Bit with Kubernetes)? The truth is, it’s good to…
Text
SRE Technologies: Transforming the Future of Reliability Engineering
In the rapidly evolving digital landscape, the need for robust, scalable, and resilient infrastructure has never been more critical. Enter Site Reliability Engineering (SRE) technologies—a blend of software engineering and IT operations aimed at creating a bridge between development and operations, enhancing system reliability and efficiency. As organizations strive to deliver consistent and reliable services, SRE technologies are becoming indispensable. In this blog, we’ll explore the latest trends in SRE technologies that are shaping the future of reliability engineering.
1. Automation and AI in SRE
Automation is the cornerstone of SRE, reducing manual intervention and enabling teams to manage large-scale systems effectively. With advancements in AI and machine learning, SRE technologies are evolving to include intelligent automation tools that can predict, detect, and resolve issues autonomously. Predictive analytics powered by AI can foresee potential system failures, enabling proactive incident management and reducing downtime.
Key Tools:
PagerDuty: Integrates machine learning to optimize alert management and incident response.
Ansible & Terraform: Automate infrastructure as code, ensuring consistent and error-free deployments.
2. Observability Beyond Monitoring
Traditional monitoring focuses on collecting data from pre-defined points, but it often falls short in complex environments. Modern SRE technologies emphasize observability, providing a comprehensive view of the system’s health through metrics, logs, and traces. This approach allows SREs to understand the 'why' behind failures and bottlenecks, making troubleshooting more efficient.
Key Tools:
Grafana & Prometheus: For real-time metric visualization and alerting.
OpenTelemetry: Standardizes the collection of telemetry data across services.
3. Service Mesh for Microservices Management
With the rise of microservices architecture, managing inter-service communication has become a complex task. Service mesh technologies, like Istio and Linkerd, offer solutions by providing a dedicated infrastructure layer for service-to-service communication. These SRE technologies enable better control over traffic management, security, and observability, ensuring that microservices-based applications run smoothly.
Benefits:
Traffic Control: Advanced routing, retries, and timeouts.
Security: Mutual TLS authentication and authorization.
4. Chaos Engineering for Resilience Testing
Chaos engineering is gaining traction as an essential SRE technology for testing system resilience. By intentionally introducing failures into a system, teams can understand how services respond to disruptions and identify weak points. This proactive approach ensures that systems are resilient and capable of recovering from unexpected outages.
Key Tools:
Chaos Monkey: Simulates random instance failures to test resilience.
Gremlin: Offers a suite of tools to inject chaos at various levels of the infrastructure.
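To make the idea concrete, here is a toy fault-injection sketch in Python. It stands in for what tools like Chaos Monkey or Gremlin do at infrastructure scale; the fault rates and the wrapped call are hypothetical.

```python
# Toy chaos experiment: inject random latency and failures into a call
# and verify the caller degrades gracefully.
import random
import time

def chaotic(fail_rate: float = 0.1, max_delay_s: float = 2.0):
    """Decorator that injects random delays and failures into a call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            time.sleep(random.uniform(0, max_delay_s))  # latency injection
            if random.random() < fail_rate:
                raise ConnectionError("chaos: injected dependency failure")
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaotic(fail_rate=0.2)
def fetch_inventory():
    return {"sku-1": 3}  # stand-in for a real downstream call

# A resilient caller catches the injected failure and falls back.
try:
    print(fetch_inventory())
except ConnectionError as err:
    print(f"fallback path taken: {err}")
```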
5. CI/CD Integration for Continuous Reliability
Continuous Integration and Continuous Deployment (CI/CD) pipelines are critical for maintaining system reliability in dynamic environments. Integrating SRE practices into CI/CD pipelines allows teams to automate testing and validation, ensuring that only stable and reliable code makes it to production. This integration also supports faster rollbacks and better incident management, enhancing overall system reliability.
Key Tools:
Jenkins & GitLab CI: Automate build, test, and deployment processes.
Spinnaker: Provides advanced deployment strategies, including canary releases and blue-green deployments.
6. Site Reliability as Code (SRaaC)
As SRE evolves, the concept of Site Reliability as Code (SRaaC) is emerging. SRaaC involves defining SRE practices and configurations in code, making it easier to version, review, and automate. This approach brings a new level of consistency and repeatability to SRE processes, enabling teams to scale their practices efficiently.
Key Tools:
Pulumi: Allows infrastructure and policies to be defined using familiar programming languages.
AWS CloudFormation: Automates infrastructure provisioning using templates.
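As an illustrative SRaaC sketch using Pulumi’s Python SDK (requires the pulumi and pulumi-aws packages; resource names are placeholders, not a prescribed setup):

```python
# Reliability configuration as code: version-controlled, reviewable,
# and repeatable. This toy program provisions a versioned log bucket.
import pulumi
import pulumi_aws as aws

audit_logs = aws.s3.Bucket(
    "sre-audit-logs",  # placeholder name
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

pulumi.export("audit_log_bucket", audit_logs.id)
```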
7. Enhanced Security with DevSecOps
Security is a growing concern in SRE practices, leading to the integration of DevSecOps—embedding security into every stage of the development and operations lifecycle. SRE technologies are now incorporating automated security checks and compliance validation to ensure that systems are not only reliable but also secure.
Key Tools:
HashiCorp Vault: Manages secrets and encrypts sensitive data.
Aqua Security: Provides comprehensive security for cloud-native applications.
Conclusion
The landscape of SRE technologies is rapidly evolving, with new tools and methodologies emerging to meet the challenges of modern, distributed systems. From AI-driven automation to chaos engineering and beyond, these technologies are revolutionizing the way we approach system reliability. For organizations striving to deliver robust, scalable, and secure services, staying ahead of the curve with the latest SRE technologies is essential. As we move forward, we can expect even more innovation in this space, driving the future of reliability engineering.
Text
Instrumenting a React App Using OpenTelemetry
Learn how to get started with OpenTelemetry in a React app with basic and auto-instrumentation, as well as adding custom spans and metrics.
Video
Master Tracing in Microservices 🔍 | Complete OpenTelemetry & Honeycomb Tutorial for Synchronous & Asynchronous Systems https://youtu.be/0qOfgj2e0og