babbleuk
Babble
29K posts
At Tech Babble, we are dedicated to helping you identify technologies and strategies that empower your employees and streamline your operations. We gather quality data from IT professionals and peer-to-peer advice from IT leaders representing a large web community, and we draw on a large library of resources made available by leading vendors in the IT industry. We then share all of this with you through blogs, vendor white papers, community forums, webcasts, software downloads, and online research platforms.
babbleuk · 4 years ago
Who is NetApp?
At Cloud Field Day 9, NetApp presented some of its cloud solutions. This comes on the heels of NetApp Insight, the annual corporate event that gives its user base not just new products but also a general overview of the company's strategy for the future. NetApp presented a lot of interesting news and projects around multi-cloud data and system management.
The Transition to Data Fabric
This is not the first time that NetApp has radically changed its strategy. Do you remember when NetApp was the boring ONTAP-only company? Not that there is anything wrong with ONTAP, of course (the storage OS originally designed by NetApp is still at the core of many of its storage appliances). It just can't be the solution for everything, even if it does work pretty well.
When ONTAP was the only answer to every question (even with StorageGRID and EF systems already part of the portfolio), the company started to look boring and, honestly, not very credible.
The day the Data Fabric vision was announced, I was still skeptical, but it was a huge change for this company, and if they could pull it off I would be really impressed. The company continued to develop products like StorageGRID, bought companies like SolidFire, integrated the different product families to make everything work together, and added tools to simplify the life of its customers. In the end, ONTAP was no longer the answer to every question, and the company became cool again.
Cloud, Built On Top of Data Fabric
Don't get me wrong, the Data Fabric vision already included the cloud, but it was incomplete in some respects. Data Fabric was developed before the success of Kubernetes, for example, and multi-cloud was still a very distant prospect. It needed an update.
Now, after Insight and CFD, I think this strategy update feels complete, and NetApp is one of the most hybrid-savvy vendors in the market. Project Astra and even the new VDS (Virtual Desktop Service) use Data Fabric as their foundation and build on top of it.
[Embedded YouTube video]
This is not a storage vendor anymore, not a traditional one at least. It is diversifying and becoming a more credible player at the cloud table. It is also interesting that it is doing so in a way that does not compete with cloud providers or their traditional partners. In fact, NetApp presents itself as an enabling foundation layer to move data seamlessly from on-premises to the cloud and then manage it consistently, with a similar user experience, across different cloud platforms. CSPs really like the first part of this, while the latter helps their partners find the same environment on which to operate their solutions. From the user perspective, NetApp provides additional options, increasing freedom of choice. A win-win-win scenario, one could say.
From the outside, NetApp is building a set of interesting solutions on top of a credible and consistent data management layer. From a certain point of view this strategy is similar to what you can get from VMware, with their stack now available on all clouds and additional solutions built on top of it (like the DRaaS coming from the acquisition of Datrium for example).
[Embedded YouTube video]
Closing the Circle
I don’t know if NetApp can still be classified as a traditional storage vendor. Yes, revenues coming from storage box sales are still the lion’s share of their income (so they are still “traditional” from one point of view), but the strategy shift is quite visible here and cloud revenues are becoming more relevant, quarter after quarter.
Most enterprises are changing the way they think about IT infrastructure: hybrid and multi-cloud strategies are now the norm, with a dramatic impact on how budgets are allocated. Users want to be free to run their applications wherever the business requires it, and a traditional storage vendor is not part of this conversation. It is important to note that NetApp is not alone from this point of view; I mentioned VMware earlier in this post, but others like Red Hat have similar strategies in my opinion. They all want to build an identical user experience no matter where you deploy your applications (and data).
Will NetApp be able to change again? Will it be a credible cloud vendor? Will it become a true hybrid cloud-storage vendor? I think it did very well with Data Fabric and is on the right path to repeat that success. Only time will tell, of course, but compared with some of the other traditional storage vendors, NetApp is really well positioned.
from Gigaom https://gigaom.com/2020/11/11/who-is-netapp/
babbleuk · 4 years ago
Phish Fight: Securing Enterprise Communications
Yes, much of the world may have moved on from email to social media and culturally dubious TikTok dances, yet traditional electronic mail remains a foundation of business communication. And sadly, it remains a prime vector for malware, data leakage, and phishing attacks that can undermine enterprise protections. It doesn’t have to be that way.
In a just-released report titled “GigaOm Radar for Phishing Prevention and Detection,” GigaOm Analyst Simon Gibson surveyed more than a dozen enterprise-focused email security solutions. He found a range of approaches to securing communications that often can be fitted together to provide critical, defense-in-depth protection against even determined attackers.
Figure 1. GigaOm Radar for Email Phishing Prevention and Detection
“When evaluating these vendors and their solutions, it is important to consider your own business and workflow,” Gibson writes in the report, stressing the need to deploy solutions that best address your organization’s business workflow and email traffic. “For some it may be preferable to settle on one comprehensive solution, while for others building a best-of-breed architecture from multiple vendors may be preferable.”
In a field of competent solutions, Gibson found that Forcepoint, purchased recently by Raytheon, stood apart thanks to the layered protections provided by its Advanced Classification Engine. Area 1 and Zimperium, meanwhile, are both leaders that exhibit significant momentum, with Area 1 boosted by its recent solution partnership with Virtru, and Zimperium excelling in its deep commitment to mobile message security.
A mobile focus is timely, Gibson says in a video interview for GigaOm. He says companies are “turning the spigot on” and enabling unprecedented access and reliance on mobile devices, which is creating an urgent need to get ahead of threats.
[Embedded Vimeo video]
Gibson’s conclusion in the report? He singles out three things: Defense in depth, awareness of existing patterns and infrastructure, and a healthy respect for the “human factor” that can make security so hard to lock down.
from Gigaom https://gigaom.com/2020/11/06/phish-fight-securing-enterprise-communications/
babbleuk · 4 years ago
Lights Out: Why Your Next Data Center May Be Hands-Free
Could we be entering an era of hands-free data centers, where remote software and robotics handle tasks that until now have fallen to human technicians? That prospect may not be as far off as you think, according to a recent InformationWeek article by John Edwards that explores the push to make data centers autonomous. As Edwards reports, the COVID-19 pandemic has helped force the issue, with data centers worldwide operating at sharply reduced headcount.
GigaOm Analyst Ned Bellavance was cited in the article. He urged IT managers to establish the proper foundation for an automation effort, cautioning that existing data center deployments may be difficult to transition to full hands-free operation. He stressed that a homogeneous and standardized environment is important to achieving success.
As a case in point, Bellavance singles out Microsoft’s Project Natick, an effort to develop enclosed data centers that can be deployed in coastal waters on the seafloor. Microsoft in 2018 deployed a 240kW data center with 12 racks and 864 servers off the coast of Scotland as part of its Phase 2 testing. As Bellavance quips:
“If you want to know what [a] true lights-out [data center] looks like, check out Project Natick from Microsoft. It’s pretty hard to send a tech undersea.”
Figure 1: Microsoft techs slide a rack of data center servers and infrastructure into an undersea container for deployment to the seafloor off the coast of Scotland. (Photo by Frank Betermin)
Ambitious projects aside, Bellavance cautioned that achieving a hands-free, lights-out data center is no small task.
“The fact is, it is incredibly hard to put all the necessary pieces together for a truly lights-out data center. You are looking at a lot of disparate systems that may have their own proprietary format and protocol,” he says.
The good news? Bellavance says progress is being made to establish helpful standards, such as Redfish for out-of-band management of servers, networking, and power. These efforts are especially important, he explains, because a single tool is unlikely to manage every aspect of the data center.
“For that reason, I would look for management software that does a great job in a specific area and has API hooks for an orchestration layer to grab onto,” Bellavance says.
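To make the idea of “API hooks” concrete, here is a minimal sketch (not taken from the article or the report) of pulling server health over Redfish, the DMTF standard mentioned above. The BMC address and credentials are hypothetical.

```python
import requests

# Hypothetical BMC address and credentials; Redfish exposes a standard REST
# tree rooted at /redfish/v1/ over HTTPS on the management controller.
BMC = "https://10.0.0.42"
AUTH = ("admin", "changeme")

def system_health(bmc: str) -> dict:
    """Return the health reported by each ComputerSystem resource on one BMC."""
    s = requests.Session()
    s.auth, s.verify = AUTH, False   # lab sketch only; verify certificates in production
    collection = s.get(f"{bmc}/redfish/v1/Systems").json()
    health = {}
    for member in collection.get("Members", []):
        system = s.get(f"{bmc}{member['@odata.id']}").json()
        health[system.get("Name", member["@odata.id"])] = system.get("Status", {}).get("Health")
    return health

print(system_health(BMC))   # e.g. {'System 1': 'OK'}
```

An orchestration layer can poll endpoints like this across the fleet and feed the results into whatever automation handles remediation.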
So how can IT organizations prepare themselves for a bold future full of flying cars and self-driving data centers? Bellavance, who has authored recent GigaOm reports about edge infrastructure and edge colocation, offers a few words of advice:
Pick a side: Either standardize on a single vendor and platform, Bellavance says, or embrace an open standard for management.
Get skilled: Hone your automation and orchestration skills, especially around working with RESTful APIs.
Start small: Begin automating common tasks now and try to find ways you can eliminate trips to the data center.
Keep count: Make a list of common hands-on tasks and prioritize them by frequency and complexity.
Get redundant: Invest in hardware with a high level of redundancy and a high mean time between failures.
Fail gracefully: Accept that failures will happen and plan to handle them in a hands-off fashion through proper design and architecture.
Consider AI: AIOps tools (see GigaOm Radar report) promise intelligent anomaly detection and even automated response. It’s worth keeping an eye on these tools, Bellavance says, but be wary of fantastic claims.
from Gigaom https://gigaom.com/2020/10/27/lights-out-why-your-next-data-center-may-be-hands-free/
babbleuk · 4 years ago
When Is a DevSecOps Vendor Not a DevSecOps Vendor?
DevOps’ general aim is to enable a more efficient process for producing software and technology solutions and to bring stakeholders together to speed up delivery. But we know from experience that this inherently creative, outcome-driven approach often forgets about one thing until too late in the process—security. Too often, security is brought into the timeline just before deployment, risking last-minute headaches and major delays. The security team is pushed into being the Greek chorus of the process, “ruining everyone’s fun” by demanding changes and slowing things down.
But as we know, in the complex, multi-cloud and containerized environment we find ourselves in, security is becoming more important and challenging than ever. And the costs of security failure are not only measured in slower deployment, but in compliance breaches and reputational damage.
The term “DevSecOps” has been coined to characterize how security needs to be at the heart of the DevOps process. This is in part principle and part tools. As a principle, DevSecOps fits with the concept of “shifting left,” that is, ensuring that security is treated as early as possible in the development process. So far, so simple.
From a tooling perspective, however, things get more complicated, not least because the market has seen a number of platforms marketing themselves as DevSecOps. As we have been writing our Key Criteria report on the subject, we have learned that not all DevSecOps vendors are necessarily DevSecOps vendors. Specifically, we have learned to distinguish between capabilities that directly enable the goals of DevSecOps from a process perspective and those designed to support DevSecOps practices. We could define them as: “those that do, and those that help.”
Here is how to tell the two types of vendor apart, and how to use them.
Vendors Enabling DevSecOps: “Tools That Do”
A number of tools work to facilitate the DevSecOps process – let’s bite the bullet and call them DevSecOps tools. They help teams set out each stage of software development, bringing siloed teams together behind a unified vision that allows fast, high-quality development, with security considerations at its core. DevSecOps tools work across the development process, for example:
Create: Help to set and implement policy
Develop: Apply guidance to the process and aid its implementation
Test: Facilitate and guide security testing procedures
Deploy: Provide reports to assure confidence to deploy the application
The key element that sets these tool sets apart is the ability to automate and reduce friction within the development process. They will prompt action, stop a team from moving from one stage to another if the process has not adequately addressed security concerns, and guide the roadmap for the development from start to finish.
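As an illustration of that gating behavior, here is a minimal sketch of a policy gate that blocks promotion to the next stage until security findings are within an agreed threshold. The finding structure and severity policy are hypothetical and not taken from any specific product.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    severity: str   # e.g. "critical", "high", "medium", "low"
    resolved: bool

# Hypothetical policy: no unresolved critical or high findings may pass a gate.
BLOCKING_SEVERITIES = {"critical", "high"}

def gate_allows_promotion(stage: str, findings: List[Finding]) -> bool:
    """Return True if the pipeline may advance past `stage`."""
    blockers = [f for f in findings
                if f.severity in BLOCKING_SEVERITIES and not f.resolved]
    if blockers:
        print(f"Gate at '{stage}': blocked by {len(blockers)} unresolved finding(s)")
        return False
    print(f"Gate at '{stage}': passed")
    return True

# Example: a test-stage gate fed by scan results gathered earlier in the pipeline.
results = [Finding("high", resolved=False), Finding("low", resolved=False)]
assert gate_allows_promotion("test", results) is False
```

In a real pipeline, a check like this would run automatically between stages, fed by the outputs of the scanning tools described below.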
Supporting DevSecOps: “Tools That Help”
In this category we place those tools which aid the execution, and monitoring, of good DevSecOps principles. Security scanning and application/infrastructure hardening tools are a key element of these processes: software composition analysis (SCA) forms a part of the development stage, static/dynamic application security testing (SAST/DAST) is integral to the test stage, and runtime app protection (RASP) is key to the deploy stage.
Tools like these are a vital layer of security tooling, especially just before deployment, and they often come with APIs so they can be plugged into the CI/CD process. However, while these capabilities are very important to DevSecOps, they play more of a supporting role, rather than being DevSecOps tools per se.
DevSecOps-Washing Is Not a Good Idea for the Enterprise
While one might argue that security should never have been shifted right, DevSecOps exists to ensure that security best practices take place across the development lifecycle. A corollary exists to the idea of “tools that help,” namely that organizations implementing these tools are not “doing DevSecOps,” any more than vendors providing these tools are DevSecOps vendors.
The only way to “do” DevSecOps is to fully embrace security at a process management and governance level: This means assessing risk, defining policy, setting review gates, and disallowing progress for insecure deliverables. Organizations that embrace DevSecOps can get help from what we are calling DevSecOps tools, as well as from scanning and hardening tools that help support its goals.
At the end of the day, all security and governance boils down to risk: If you buy a scanning tool so you can check a box that says “DevSecOps,” you are potentially adding to your risk posture, rather than mitigating it. So, get your DevSecOps strategy fixed first, then consider how you can add automation, visibility, and control using “tools that do,” as well as benefit from “tools that help.”
from Gigaom https://gigaom.com/2020/10/26/when-is-a-devsecops-vendor-not-a-devsecops-vendor/
babbleuk · 4 years ago
Courting Visibility: Value Stream Management
“If you can’t measure it, you can’t manage it.”
It’s an old management saw, but one rooted in truth, especially in the arena of software development where establishing a common vision and language of success can be so important. DevOps practices have for years given organizations a model for linking and streamlining processes, enabling both continuous integration (CI) and continuous delivery (CD). But as GigaOm VP of Research Jon Collins explains, businesses still struggle with understanding the actual value derived from these processes.
In his recent Strategy Considerations report, “Driving Value Through Visibility,” Jon emphasizes that delivering on innovation requires more than moving from a project to a product mindset. It hinges on the ability to measure success, across both the products you create and the processes you use to create them.
“Across organizations today, we’re seeing DevOps have to up the ante—to move beyond simply doing things faster and towards enabling software-based innovation to take place in a managed, well-governed way,” Jon explains in an interview. “Value Stream Management (VSM) is an essential tool in the DevOps governance tool chest, helping deliver both more efficient pipelines and higher-value results.”
At the core of VSM is the concept of visibility, which makes it possible for organizations to apply the right process to the right goals. In the report, Jon points out that visibility confers the insight needed for managers to set goals and prioritize activities, for process owners to identify bottlenecks and address their causes, and for all stakeholders to gain a common basis on which to collaborate. Within the context of VSM, visibility does two things:
Ensure efficiency (“doing things right”): This is about minimizing wasteful activity and maximizing both productivity and job satisfaction.
Ensure effectiveness (“doing the right thing”): This involves delivering results of maximum value to the business, making best practices repeatable, and replicating success across teams.
Figure 1: VSM Considers Both Efficiency and Effectiveness
This isn’t about process efficiency—well, it is, but that’s more a means than an end when it comes to achieving business goals. Rather, developing visibility is about ensuring that your delivery cycles work in lockstep with the needs of business leaders to yield positive outcomes and maximize your innovation process. In short, it becomes the link between IT and the business—two entities that too often stand at opposite ends of a chasm.
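To ground the “measure it” half of that old saw, here is a minimal sketch of the kind of flow metrics a VSM tool derives from work-item timestamps. The data structure is hypothetical and not tied to any particular product.

```python
from datetime import datetime
from statistics import mean

# Hypothetical work items: when they were requested, started, and delivered.
work_items = [
    {"requested": datetime(2020, 9, 1), "started": datetime(2020, 9, 3), "delivered": datetime(2020, 9, 10)},
    {"requested": datetime(2020, 9, 2), "started": datetime(2020, 9, 8), "delivered": datetime(2020, 9, 12)},
]

# Lead time: request to delivery (what the business experiences).
lead_times = [(w["delivered"] - w["requested"]).days for w in work_items]
# Cycle time: work started to delivery (what the team controls).
cycle_times = [(w["delivered"] - w["started"]).days for w in work_items]

print(f"Average lead time: {mean(lead_times):.1f} days")    # efficiency of the whole stream
print(f"Average cycle time: {mean(cycle_times):.1f} days")  # efficiency of the delivery step
```

Efficiency questions (cycle time, bottlenecks) and effectiveness questions (is the delivered work the most valuable work?) both start from this kind of shared, visible measurement.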
Learn more about Collins’ Strategy Considerations Report, “Driving Value Through Visibility.”
from Gigaom https://gigaom.com/2020/10/23/courting-visibility-value-stream-management/
babbleuk · 4 years ago
Application Modernization: A GigaOm Field Test
So-called legacy software gets that name for a reason—it has done enough for the organization over the years to earn a legacy of enabling the business. But as GigaOm Analyst Ned Bellavance notes in a recently published GigaOm benchmark report (“Costs and Benefits of .NET Application Migration to the Cloud”), aging on-premises applications and infrastructure can work against businesses as they seek to scale, innovate, and grow.
A cloud modernization effort can change that. By migrating application logic and functionality to the cloud, enterprises avail themselves of the matchless scalability and managed services offered by major cloud providers. In the report, Bellavance lays out four options for organizations looking to cloudify their application portfolios.
Figure 1: Cloud Application Modernization Spectrum
Rehost: “Lift-and-shift” virtual machines running on on-premises servers to cloud-based servers. Simple and quick.
Replatform: Migrate application logic (say, ASP.NET apps) to a cloud-based Platform as a Service (PaaS) from an on-premises platform. Still simple and adds managed infrastructure, but requires minor code changes.
Refactor: Review and restructure existing code to leverage cloud-based models and services. True cloud focus and deep PaaS integration comes at the cost of major code changes and re-architecting.
Rewrite: Replace existing on-premises applications with cloud-native versions offering similar, if not enhanced functionality. Complex and time consuming, but the resulting cloud-native applications are loosely coupled and independently scalable.
Of these, replatforming offers considerable value and opportunity. Organizations avoid the cost and risk of new application development, while gaining access to powerful managed services and the raw scalability of the cloud.
In the report, Bellavance designed a series of benchmark tests to prove out real-world application performance across three largely equivalent on-premises and cloud-based PaaS infrastructures:
Windows VMs running on VMware
Microsoft Azure using Azure App Service and Azure SQL Database
AWS using Elastic Beanstalk, EC2, and Amazon RDS
His findings? Performance among the three options was roughly on par—unsurprising given that the test environment was designed for equivalency—but the costs varied markedly. The estimated cost of the tested on-premises infrastructure was $69,300, while the equivalent cost for AWS was $43,060. By contrast, for .NET shops moving to Azure, the cost was even lower—just $31,824.
The steep advantage versus AWS comes in large part from Azure Hybrid Benefit licensing, which enables Microsoft customers to apply their existing Windows Server and SQL Server licenses to Azure virtual machines and Azure SQL Database instances. And that can yield more than $10,000 in savings for an Azure migration compared to AWS.
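A quick arithmetic check of those figures (all costs taken from the report summary above):

```python
on_prem, aws, azure = 69_300, 43_060, 31_824      # estimated infrastructure costs above
print(f"Azure vs. AWS:     ${aws - azure:,}")     # $11,236 -> the ">$10,000" savings cited
print(f"Azure vs. on-prem: ${on_prem - azure:,}") # $37,476
print(f"AWS vs. on-prem:   ${on_prem - aws:,}")   # $26,240
```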
Read the full GigaOm Report, “Costs and Benefits of .NET Application Migration to the Cloud.”
from Gigaom https://gigaom.com/2020/10/20/application-modernization-a-gigaom-field-test/
babbleuk · 4 years ago
Top Property Management Companies in London
Property management companies in London aren't hard to come by, and if you lack the expertise needed to manage your properties or deal with tenants, you may have a hard time deciding which one is right for you. Is your real estate portfolio so extensive that you lack time to manage all your properties? Or […] from Babble Business https://www.babble.uk.com/top-property-management-companies-in-london/?utm_source=rss&utm_medium=rss&utm_campaign=top-property-management-companies-in-london
babbleuk · 4 years ago
DevOps’ Next Level: Value Stream Management
Faster isn’t better when you’re running full speed the wrong way. That’s the message coming from the nascent discipline of value stream management (VSM), which seeks to refine and improve the efforts of DevOps teams steeped in the Agile culture of sprints and rapid releases. VSM tools help managers look beyond the numbers on their stopwatch to understand—and act on—the actual value being delivered.
In his just-published report, “GigaOm Radar for Value Stream Management,” GigaOm VP of Research Jon Collins completes an in-depth exploration of the VSM space that includes his earlier report, “Key Criteria for Evaluating Value Stream Management,” published this summer. The Radar report provides a detailed breakdown and assessment of VSM tools and vendors, giving IT decision makers the information they need to make informed decisions around a VSM deployment.
“Back in the old world, we had very tightly defined lifecycles (remember waterfall?), which were too constraining and therefore dropped in the name of agility and speed. But this had the effect of throwing the baby out with the bathwater, resulting in fragmented, complex pipelines and a loss of visibility,” Collins said in an interview. “As more consistent approaches emerge, organizations will be better able to scale up their software-based innovation efforts. I shouldn’t find that exciting, but I do.”
Collins’ exploration reveals a youthful sector occupied by vendors taking different approaches to a common goal. He says that will change as the sector matures and vendors become more consistent in their approaches. But for now, he finds just two offerings—IBM and Tasktop—worthy of space in the Leaders circle in the GigaOm Radar report (Figure 1). However, he sees promise in vendors like Digital.ai, Plutora, and CloudBees as they advance their tooling.
Figure 1: GigaOm Radar for Value Stream Management
How should IT decision makers approach this maturing class of solutions? Collins urges a careful approach. He notes in the report that some solutions—Jira, GitLab, and ZenHub—are platform-specific, while others are designed to integrate with diverse toolsets to provide a comprehensive view. Functional approaches vary widely as well.
There are a lot of tough decisions facing teams deploying VSM to enable intelligent inspection and analysis of DevOps processes, and they are made tougher thanks to the rapid pace of advancement in the sector. A thoughtful approach to selecting a platform is critical to ensure it will support your organization’s needs today and in the future.
from Gigaom https://gigaom.com/2020/10/09/devops-next-level-value-stream-management/
babbleuk · 4 years ago
The Future of Unstructured Data Management
One of the most interesting and successful research projects I’ve worked on lately was the one about unstructured data management. Our clients loved the Key Criteria and Radar reports, and I had many fascinating conversations with vendors and users about what is coming next in this space.
Why Unstructured Data Management
First, let’s be clear here—explosive data growth is not something you can bargain with or avoid. You can’t stop it. Human-generated data has been joined by a growing host of sensors, cameras, and countless other devices that are capable of producing overwhelming amounts of data for an incredibly diverse range of use cases. Most of this data we are keeping in our on-premises and cloud storage systems. Some of this data is analyzed almost immediately and then lies dormant for quite a long time, sometimes forever. There are plenty of reasons to keep data around for long periods of time—internal policy, compliance, regulations, you name it.
Traditional storage systems are not designed to cope with this. The capacities required by all this data get out of hand, which is why scale-out architectures are being broadly adopted by organizations of every type, no matter the size. Scale-out infrastructure complexity is no longer the obstacle it once was and can be managed by any system administrator. But at least three challenges remain:
Correct data placement: Data is not created equal and its value may change over time. The concepts of primary and secondary storage are well worn, but they offer insight into where your spending should go. Primary, secondary, and tertiary data provide clear targets for budgeting spend based on $/GB, with attendant impacts on latency, throughput, and scalability in the storage system. Modern solutions can take advantage of object and cloud storage to further optimize data placement, leaving the user with several options to expand system capacity at a low cost (a simple placement-policy sketch follows this list).
Data silos: These have always been a challenge, even with new technology like automated tiering to address them. The rise of hybrid and multi-cloud infrastructures has organizations working to get data closer to users and applications, and that is creating a growing number of data silos. They are difficult to manage and can turn even the most basic operations into a nightmare.
Understand value: Dispersing data across multiple environments makes it harder to find it, analyze it, and ultimately understand its real value. And without that understanding, it is very hard to manage cost or data placement strategies.
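As promised above, here is a simple placement-policy sketch that maps a file to a tier based on access recency and size. The tier names and thresholds are hypothetical; real systems weigh many more signals.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical recency thresholds; real policies consider many more signals.
HOT_WINDOW = timedelta(days=30)
WARM_WINDOW = timedelta(days=180)

def choose_tier(last_access: datetime, size_bytes: int, now: Optional[datetime] = None) -> str:
    """Map a file to 'flash', 'object', or 'cloud-archive' by recency and size."""
    now = now or datetime.utcnow()
    age = now - last_access
    if age <= HOT_WINDOW:
        return "flash"           # active data: pay for $/IOPS
    if age <= WARM_WINDOW and size_bytes < 1 << 30:
        return "object"          # lukewarm and small enough to recall quickly
    return "cloud-archive"       # dormant data: optimize for $/GB

print(choose_tier(datetime.utcnow() - timedelta(days=400), 5 << 30))   # -> cloud-archive
```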
Figure 1. Traditional Storage Systems
This is only the beginning. In my report I talked about infrastructure-focused and business-oriented data management: the former is aimed at improving infrastructure TCO, but it is the latter that can really bring the biggest return on investment (ROI) and amplify the value of your data. At the end of the day, you need to adopt the right data management practices and tools to respond correctly to the demanding requirements imposed by business and regulations.
Why the Future of Data Management is in the Cloud
The trend is clear: The future of IT infrastructures is hybrid, with data distributed across on-premises systems and multiple clouds no matter the size of your organization. If you think about this scenario, you realize that it requires a different and modern approach to data management. Some of the key criteria I analyzed in my GigaOm report will become even more important and provide the foundation for the next generation of products and services to manage your unstructured data estate. These include:
Virtual global data lakes: You can’t fight data growth or its inevitable sprawl of data silos. The larger the organization, the more data sources and repositories you must manage. What you can do is consolidate these repositories into a single, virtual domain. This approach lets you implement global indexing and establish the foundation for everything you need on top of it: search, analytics, reporting, security, auditing, and so on.
SaaS-based tools: Data management should be global, flexible, and adaptable. You don't want to court risk with a data management solution — including a SaaS-based one — that is limited in reach (i.e., the use cases it supports) or available resources, or that is unable to scale quickly in capacity and functionality when needed. Additionally, by concentrating on a single SaaS platform, the infrastructure is dramatically simplified and data silos are virtually eliminated. Users and system administrators can take advantage of a global view of all the data to manage multiple applications and workflows while saving time and getting results faster. What's more, a SaaS-based data management solution is more approachable for mid-size IT organizations, making data management affordable to a broader spectrum of organizations.
App marketplaces: App marketplaces are still a new thing, but extending the basic functionality of the product to address new use cases has several benefits. First, a SaaS deployment model eliminates the need for physical infrastructure to accommodate the application and its necessary copies of the data. And second, it enables organizations to extend the data management platform with additional applications, improving reach across the org and taking advantage of the virtual global data lake to improve several processes.
Figure 2. Hybrid Storage Systems
The connection between virtual global data lakes, SaaS, and marketplaces is important. It creates a universal data domain that enables complete visibility and reuse of data.
Virtual data lakes are easy to create with the right technology. For example, as described in the following flow chart, the backup process can be instrumental in the creation of a virtual data lake. All data is automatically collected and indexed while ingested, becoming immediately available for data management tasks.
Figure 3. Unstructured Data Management Flow Chart
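To make that flow concrete, here is a minimal sketch of how a backup ingest step might populate a searchable metadata catalog. The catalog schema and layout are hypothetical and only meant to illustrate the idea of indexing data as it is ingested.

```python
import hashlib
import os
import sqlite3
from datetime import datetime, timezone

def index_during_ingest(backup_root: str, catalog_path: str = "catalog.db") -> None:
    """Walk a backup set and record searchable metadata for every file."""
    db = sqlite3.connect(catalog_path)
    db.execute("""CREATE TABLE IF NOT EXISTS files
                  (path TEXT PRIMARY KEY, size INTEGER, mtime TEXT, sha256 TEXT)""")
    for dirpath, _, filenames in os.walk(backup_root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            stat = os.stat(full)
            with open(full, "rb") as fh:          # reads the whole file for brevity
                digest = hashlib.sha256(fh.read()).hexdigest()
            mtime = datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat()
            db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                       (full, stat.st_size, mtime, digest))
    db.commit()
    db.close()

# Once populated, the catalog answers questions without touching the backup data,
# e.g. "which files changed in the last 30 days?" or "where do duplicates live?"
```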
At this point, the potential use cases are limited only by the applications offered in the marketplace. Think about it:
Global search: Imagine a private, Google-like experience with all the necessary access control mechanisms. Create legal holds from specific queries, or build data sets for compliance checks and other applications.
Security analytics: AI-based analytics tools that scan your data for access patterns to discover security threats such as ransomware, data leaks, and data breaches.
Compliance: Scan for specific patterns inside your files to prevent data privacy issues, create “right to be forgotten” reports for authorities, support document checks, and so on (see the pattern-scan sketch after this list).
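As a taste of that compliance use case, here is a minimal sketch of a pattern scan over a set of files. The two patterns shown (an email address and a very rough payment-card shape) are illustrative only; a real solution would use far more robust detection.

```python
import os
import re

# Illustrative patterns only: real PII/PCI detection is much more sophisticated.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible_card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_sensitive_data(root: str) -> dict:
    """Return {file_path: [pattern_names]} for files containing suspect patterns."""
    hits = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            found = [label for label, rx in PATTERNS.items() if rx.search(text)]
            if found:
                hits[path] = found
    return hits

# Example: feed the report into a "right to be forgotten" or data-privacy workflow.
# print(scan_for_sensitive_data("/mnt/datalake/exports"))
```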
These are only a few examples. Having access to all of your data from a single and extensible platform will open a world of possibilities and cost savings. What’s more, the pay-as-you-go model of the cloud enables enterprises of all sizes to use only the apps they need when they need them, simplifying adoption and testing of new use cases while, again, keeping costs under control.
At the end of the day, by adopting a SaaS-based global data management solution, the user can quickly optimize costs and improve overall infrastructure TCO. This is only the low-hanging fruit, though. In fact, business owners will have powerful tools to increase the value of the data stored in their systems while responding adequately to the threats posed by poor data management.
Figure 4. All Risks Associated With Poor Unstructured Data Management Are Connected
Closing the Circle
Data management is becoming the pillar that makes storage infrastructures sustainable over time, and it is the only way to plan investments correctly.
Users are starting to adopt multi-cloud infrastructures, and they need to respond to a growing number of challenges, some of them around infrastructure efficiency and cost reduction. But with data dispersed across several repositories, the focus is shifting toward more complex and business-focused requirements.
Cloud-based data management solutions can be the answer if implemented correctly, by creating global virtual data lakes that can be accessed by many applications and users simultaneously. In this context, the ease of use and the consistent user experience provided by a SaaS solution and a good app marketplace will be key to attracting different personas in the organization. And not only that: with this kind of approach (SaaS plus an app marketplace), data management is democratized and available to a broader audience, no matter how small your IT staff or organization is.
from Gigaom https://gigaom.com/2020/09/30/the-future-of-unstructured-data-management/
babbleuk · 4 years ago
Hyperconvergence and Kubernetes
Hyperconverged infrastructure (HCI) has quickly earned a place in the data center, mostly due to the promise of infrastructure simplification. HCI has already worked very well for virtualized infrastructures, but will this be the case with Kubernetes? There's reason for optimism, and I offer a couple of thoughts here as to why.
HCI is one of many ways to build your computing stack. The idea is to virtualize and collapse several components of the stack, including storage and networking, alongside compute resources (virtual machines). The approach trades off some performance for enhanced flexibility and ease of use. In its early iterations, HCI was a good fit mostly for small-to-medium-sized businesses (SMBs) and vertical applications such as virtual desktop infrastructure (VDI). Now the performance gap has narrowed and HCI can be leveraged across a broader range of applications. Some IT organizations have made HCI their go-to technology, with 90% of their data centers built around HCI!
Kubernetes and HCI
The question before us now is: “Is HCI good for Kubernetes?” The short answer is yes, but there are a few aspects to consider first.
Kubernetes is a container orchestrator that usually runs on Linux operating systems. Applications are deployed in containers that are then organized in pods (a pod is the minimal allocation unit for Kubernetes and can comprise one or more containers). Unlike virtualized infrastructures, where each VM has its own operating system, containers share most of their basic components with the underlying operating system. From this point of view virtualization is unnecessary and expensive, but (and there is always a but) the reality is more complex, for two reasons.
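For readers less familiar with pods, here is a minimal sketch using the official Kubernetes Python client to define a pod with two containers that share the same network namespace and lifecycle. The image names and namespace are arbitrary, and a reachable cluster with a valid kubeconfig is assumed.

```python
from kubernetes import client, config

config.load_kube_config()   # assumes a reachable cluster and a local kubeconfig

# One pod, two containers: the minimal unit that Kubernetes schedules as a whole.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="web", image="nginx:1.25"),
        client.V1Container(name="log-shipper", image="busybox:1.36",
                           command=["sh", "-c", "tail -f /dev/null"]),
    ]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
print("Pod created: both containers share the pod's network namespace and lifecycle")
```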
First, few enterprises can migrate to a 100% container environment overnight. This means that VMs and containers must live together for a very long time. In some cases the application will be hybrid forever. Some components will remain virtualized—an old commercial database in a VM for example—accessed by the containerized application. This could happen for several reasons, sometimes just because the virtualized component can’t be deployed in a container or it is too expensive to migrate.
Second, Kubernetes is just an orchestrator. Additional components are needed to make the Kubernetes cluster able to respond adequately to enterprise needs. This includes data storage and networking, especially when the applications are stateful. Managing stateful applications was considered non-essential at the beginning, but it is becoming standard for many Kubernetes deployments.
So, in the end, we have two needs: management of a hybrid environment and infrastructure simplification. Again, HCI looks more interesting than ever in this context.
HCI for Kubernetes
There are at least three examples I can offer to explain why HCI can be beneficial to your Kubernetes strategy:
VMware. You probably already know about VMware's efforts around Kubernetes (here's a free report I wrote not long ago about the VMware Tanzu portfolio). VMware simply integrated Kubernetes with its hypervisor. Even though this could be seen as an aberration by Kubernetes purists, there are advantages in having VMs and containers integrated. The cost of the VMware licenses can be challenging but, if we think in terms of TCO, it will be easier to manage than a complex hybrid environment.
Nutanix. Nutanix has a solution that allows you to implement Kubernetes transparently on top of its own hypervisor (and cloud now). It takes a different approach to the problem than VMware, but the benefits to the user are similar.
Diamanti. Diamanti goes in an entirely different direction, with storage and networking components that are integrated in the platform and optimized for Kubernetes. This design overcomes some of the limitations of the orchestrator and improves its overall efficiency to bring performance and simplicity to the table. If you plan to invest heavily in Kubernetes, Diamanti offers a valid alternative to both general-purpose HCI and bare-metal Kubernetes.
Here is a video about the Diamanti architecture and how it differs from the others.
[Embedded YouTube video]
Closing the Circle
Enterprises should look into HCI for Kubernetes for the same reason they loved HCI for virtualization. There are two approaches to consider: One that extends the existing HCI platform to include Kubernetes, and the other that employs a dedicated HCI for Kubernetes. Both approaches have benefits and drawbacks and your choice will depend on how critical Kubernetes is to your overall IT strategy now and for the next couple of years. Other important aspects to consider include the scope of your infrastructure and the level of efficiency you need to achieve from it.
The VMware and Nutanix solutions are both solid and will help you manage a seamless transition from virtualization to a hybrid (VMs + containers) environment. Meanwhile, solutions like Diamanti can combine the simplicity of HCI with the efficiency of a dedicated solution.
from Gigaom https://gigaom.com/2020/09/15/hyperconvergence-and-kubernetes/
babbleuk · 4 years ago
Is Scale-Out File Storage the New Black?
Storage vendors, especially startups, are finally supporting new media types in their systems. In this article I want to talk about the impact that quad-level cell (QLC) NAND memory and Intel Optane have on the development of scale-out systems, and how vendors like VAST Data are taking full advantage of them in their product design.
In my recent GigaOm report about scale-out file storage (Key Criteria for Evaluating Scale-Out File Storage), I mentioned several key criteria that can help users evaluate this type of storage system. Today I want to explore specific solution functionality that can make a difference in a scale-out storage deployment, and how it should impact your thinking.
Key Criteria, Considered
Before I go further, however, I want to dive into the structure of our GigaOm Key Criteria reports and how they help inform decision making. For each sector we assess, the Key Criteria report explores three sets of criteria specific to that sector. They are:
Table stakes are solution features or characteristics that you should be able to take for granted in a product class. For example, if the class were cars, things like infotainment systems and air conditioning are now standard in every car, and therefore are not generally significant in driving a decision.
Emerging technologies are all about what’s next—what features or capabilities can you expect to emerge in the near future (12-18 months). For cars, that might be the first implementation of autonomous driving for highways. I honestly don’t know—I’m not an expert in car technology.
Key criteria are the core of the report and address features and characteristics that really make a difference in assessing solutions. In a car, this might be the use of an electric powertrain or advanced safety devices to protect you and pedestrians from accidents.
In addition, the Key Criteria report breaks down a number of evaluation metrics, which are broad, top-line characteristics of solutions being evaluated, and are helpful in comparing the relative value of solutions in specific areas. For cars, these top-line characteristics might be performance, comfort, fuel efficiency, range, and so on.
These reports finish with an analysis of the impact that each of the key criteria has on the high-level evaluation metrics. This indicates whether a particular solution might meet your needs. For example, an electric powertrain may have a significant, positive effect on metrics like comfort, performance, and fuel efficiency, while its impact on vehicle range is less beneficial.
In the Key Criteria report for scale-out storage, the important differentiating key criteria we analyzed were:
Integration with object storage
Integration with the public cloud
New flash-memory devices
System management
Data management
Now I want to focus on one of these key criteria and give you an example of a vendor that has interpreted this important metric in a very innovative and efficient way.
Flash Memory and Scale-Out
I explore the impact of flash memory storage on scale-out systems in depth. The following excerpt comes directly from my GigaOm report, “Key Criteria for Evaluating Scale-Out File Storage.”
Modern scale-out file storage is utilized for a growing number of workloads that need capacity associated with high performance. The data stored in these systems can be anything from very large media files down to files a few kilobytes in size. Finding the right combination of $/GB and $/IOPS is not easy, but in general active data is only a small percentage of the total capacity. As a consequence, flash memory is becoming the standard for active workloads, while object storage (either an on-premises, HDD-based or cloud object store) houses inactive data. This two-tier approach is explained in this GigaOm report.
Hard drives still provide the best $/GB ratio, but to stay performance-competitive with solid state flash media solutions, HDD vendors are developing new technologies and access methods to close the gap. Unfortunately, these approaches add complexity that can actually make HDD utilization in standard enterprise arrays more and more difficult.
On the other hand, even though MLC and TLC flash memory prices have been falling steadily for quite some time, the $/GB ratio remains too high to satisfy the capacity requirements of some applications. Meanwhile, performance of standard flash memory falls short for applications that require accessing data at memory speed without the cost, persistence and capacity limitations imposed by RAM. To address this price/performance conundrum, the memory industry has come up with two solutions:
Quad-level cell (QLC) NAND memory is the next iterative leap in the evolution of flash memory, thanks to new manufacturing techniques that stack up to 96 layers of cells in the same chip (and even more for the next generation). This density enables vendors to build very cheap and high-capacity storage devices.
Unfortunately, these devices have a few important drawbacks. They have a very weak write endurance (up to just 500 write cycles) and a much lower write speed compared to MLC SSDs. In comparison, an MLC NAND device can endure between ten and twenty thousand write cycles, and write speeds can reach up to four times higher than QLC. The combination of lower $/GB ratios and higher media density and efficiency can significantly impact the overall TCO of a solution. Not all architectures are ready for QLC 3D NAND. How it is implemented makes the difference in terms of efficiency, TCA, and TCO.
Memory-class storage (i.e. Intel Optane) is gaining traction in the enterprise storage market. This new kind of device bridges the gap between DRAM and NAND memory in terms of latency, cost, and features. It is not as fast as RAM, but prevents access to slower storage and improves overall system response and availability when used in persistent mode.
In addition, storage vendors can use this media for caching and as a landing zone for hot data for performing operations like compression, optimization, and erasure coding. It is also useful for fast handling of metadata. In any case, the user can expect a general improvement in performance, at reasonable cost.
These new classes of memory do not remove HDDs from the game. Spinning media remains viable as near-line tier storage in modern systems and can be effective in large systems that consolidate multiple workloads on a single storage system, providing a capacity tier at low cost. That said, organizations are moving away from hybrid configurations in favor of all-flash storage systems associated with an external object store, or they take advantage of the cloud to develop the necessary capacity.
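Stepping outside the excerpt for a moment, a back-of-the-envelope calculation shows why the endurance gap matters and why minimizing write amplification is so valuable for QLC. The drive capacity and daily write rate below are hypothetical; the cycle counts are the rough figures quoted above.

```python
# Rough endurance math for a hypothetical 15.36 TB drive.
capacity_tb = 15.36
qlc_cycles, mlc_cycles = 500, 10_000     # program/erase cycles quoted above (low end for MLC)
daily_writes_tb = 10                     # hypothetical sustained ingest per drive

def lifetime_years(capacity_tb, pe_cycles, writes_per_day_tb, write_amplification=1.0):
    """Endurance-limited lifetime: total writable TB divided by TB written per day."""
    total_writable_tb = capacity_tb * pe_cycles / write_amplification
    return total_writable_tb / writes_per_day_tb / 365

print(f"QLC, write amplification 3: {lifetime_years(capacity_tb, qlc_cycles, daily_writes_tb, 3):.1f} years")
print(f"QLC, write amplification 1: {lifetime_years(capacity_tb, qlc_cycles, daily_writes_tb, 1):.1f} years")
print(f"MLC, write amplification 3: {lifetime_years(capacity_tb, mlc_cycles, daily_writes_tb, 3):.1f} years")
```

Under these assumptions the same QLC drive lasts roughly three times longer if the system writes to it in large, well-organized chunks instead of amplifying every small write, which is exactly the behavior the next section touches on.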
Impact Analysis: VAST Data
VAST Data, a startup that is doing really well with its scale-out file storage solution, has implemented Intel Optane and QLC NAND synergistically in its system, with great results. Intel Optane is used as a landing area for all data. This really isn't a cache but more a staging area where data is prepared, chunked, and organized to minimize write amplification and optimize operations in the QLC back end.
By implementing this data path, which is associated with innovative data compaction and protection techniques, the system can achieve an impressive $/GB while maintaining performance throughput at the front end. This makes the solution a good fit for workloads that need both capacity and performance at a reasonable cost. This is an oversimplification, perhaps, but I encourage you to check out the videos recorded during Storage Field Day 20 to get a good grasp of the VAST Data architecture.
[Embedded YouTube video]
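The staging idea itself is generic and can be sketched independently of any vendor. The following is not VAST's actual data path, just an illustration of coalescing many small writes into large, sequential flushes to spare a low-endurance back end.

```python
class WriteStager:
    """Accumulate small writes in a fast persistent buffer, flush in large chunks."""

    def __init__(self, backend_flush, chunk_size: int = 1 << 20):
        self.backend_flush = backend_flush   # callable taking one large bytes object
        self.chunk_size = chunk_size         # flush granularity (1 MiB here)
        self.buffer = bytearray()            # stands in for an Optane-class landing area

    def write(self, data: bytes) -> None:
        self.buffer.extend(data)
        while len(self.buffer) >= self.chunk_size:
            self.backend_flush(bytes(self.buffer[:self.chunk_size]))
            del self.buffer[:self.chunk_size]

    def close(self) -> None:
        if self.buffer:                      # flush the final partial chunk
            self.backend_flush(bytes(self.buffer))
            self.buffer.clear()

# One thousand 4 KiB writes become a handful of large, sequential back-end writes.
flushes = []
stager = WriteStager(backend_flush=flushes.append)
for _ in range(1000):
    stager.write(b"\x00" * 4096)
stager.close()
print(f"{len(flushes)} back-end flushes instead of 1000 small writes")   # -> 4 flushes
```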
As mentioned earlier, the impact that key criteria have on evaluation metrics provides a better understanding of the solution and, ultimately, how well the product might fit your needs.
Figure 1 shows the high impact that new flash memory devices have on performance, scalability, system lifespan, TCO and ROI, and flexibility. Only usability is minimally impacted. What this means is that if you place value on the metrics that are highly impacted by this key criterion, you should put VAST on your short list for a scale-out storage evaluation.
Figure 1. The Impact of Key Criteria on Evaluation Metrics
Closing the Circle
The research at GigaOm extends beyond the Key Criteria report, which sets the table for the detailed market sector analysis in our GigaOm Radar reports. The Key Criteria report is essentially a structured overview of a product sector, while the Radar report presents a market landscape that summarizes all the key criteria and metrics evaluations for each vendor and positions them together on a chart.
One of the most important characteristics of the Radar report is that it is technology focused and doesn't really take into account market share. This may sound odd when compared to analysis available from other sources, but our commitment is to put the technology first, and to help IT decision makers understand both what a solution can do for their organizations and where its development is going. As I often say, it is a forward-looking approach rather than a conservative, backward-looking one – which would you prefer to inform your IT organization?
As you can see in the diagram below (Figure 2), VAST is well-positioned in the Leaders circle of the GigaOm Radar report chart and was graded as an Outperformer. This reflects the company's aggressive forward movement, not just with its flash-memory implementation but also in its success shaping its solution to solve challenges with unstructured data. VAST is a storage vendor that is worth a look.
Figure 2. GigaOm Radar Chart for Scale-Out File Systems
from Gigaom https://gigaom.com/2020/09/14/is-scale-out-file-storage-the-new-black/
babbleuk · 4 years ago
Antarctica and Greenland Are on Track for the Worst-Case Climate Scenario
As the world’s ice sheets have been cracking up and melting down, climatologists have warned that further decimation could cause devastating levels of sea level rise. from gizmodo http://www.gizmodo.co.uk/2020/09/antarctica-and-greenland-are-on-track-for-the-worst-case-climate-scenario/
babbleuk · 4 years ago
Literally Pause the Internet With This Built-in Browser Tool
Sometimes the best thing to do is step away from your desk, yet it’s not easy when there are a billion things on our screens to hold our attention. from gizmodo http://www.gizmodo.co.uk/2020/09/literally-pause-the-internet-with-this-built-in-browser-tool/
babbleuk · 4 years ago
Microsoft May Release a Cheaper Surface Laptop This Fall
Based on some new rumors, in addition to refreshes and spec bumps for existing systems, Microsoft may also release a new more affordable version of its Surface Laptop. from gizmodo http://www.gizmodo.co.uk/2020/09/microsoft-may-release-a-cheaper-surface-laptop-this-fall/
babbleuk · 4 years ago
Disney's Mulan Faces Backlash for Thanking Chinese Region Marred by Human Rights Violations
Disney stands accused of overlooking actual oppression of Muslims in China to make its movie about a woman who overcomes systemic oppression. from gizmodo http://www.gizmodo.co.uk/2020/09/disneys-mulan-faces-backlash-for-thanking-chinese-region-marred-by-human-rights-violations/
babbleuk · 4 years ago
Another Sweeping Search for Aliens Comes Up Short
A groundbreaking survey of over 10 million star systems has failed to detect signs of extraterrestrial intelligence. from gizmodo http://www.gizmodo.co.uk/2020/09/another-sweeping-search-for-aliens-comes-up-short/
babbleuk · 4 years ago
15 Things You Can Do in Android 11 That You Couldn’t Do Before
If you’re able to upgrade, here are 15 new tricks you can try that aren’t available on devices still running Android 10 or older. from gizmodo http://www.gizmodo.co.uk/2020/09/15-things-you-can-do-in-android-11-that-you-couldnt-do-before/