#Mean Time to Detect (MTTD)
deployvector · 21 hours
Text
Enhancing IT Security with Vector’s Threat Detection
In an era where cyber threats are more sophisticated than ever, early threat detection has become essential for businesses. Cyberattacks are no longer a matter of "if" but "when." To combat these evolving threats, organizations must employ advanced security measures that ensure real-time protection. Vector offers a comprehensive suite of security tools designed to enhance cybersecurity, including advanced threat detection and proactive response mechanisms. With its cutting-edge AI-driven capabilities, Vector delivers unmatched security solutions that identify and mitigate risks before they escalate.
AI-Driven Threat Detection: The Future of IT Security
The cornerstone of Vector’s security is its AI-driven threat detection capabilities. By leveraging artificial intelligence (AI) and behavioral analytics, Vector can predict and detect anomalies across systems, identifying potential threats before they cause damage. Unlike traditional security methods, Vector’s threat detection is predictive rather than reactive, offering real-time analysis of activities and deviations from normal behavior patterns.
This proactive approach helps companies minimize the mean time to detect (MTTD) threats, enabling them to respond faster and more efficiently. With Vector, organizations can maximize true positives while reducing false positives, ensuring that security teams can focus on genuine risks rather than wasting time on irrelevant alerts.
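To make the metric concrete, here is a minimal sketch of how MTTD, and the related mean time to respond (MTTR), can be computed, assuming hypothetical incident records with onset, detection, and resolution timestamps:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (attack began, detected, resolved).
incidents = [
    (datetime(2024, 5, 1, 9, 0),   datetime(2024, 5, 1, 9, 12),  datetime(2024, 5, 1, 10, 3)),
    (datetime(2024, 5, 3, 14, 30), datetime(2024, 5, 3, 14, 35), datetime(2024, 5, 3, 15, 0)),
]

# MTTD: average gap between onset and detection; MTTR: detection to resolution.
mttd_min = mean((det - began).total_seconds() for began, det, _ in incidents) / 60
mttr_min = mean((res - det).total_seconds() for _, det, res in incidents) / 60
print(f"MTTD: {mttd_min:.1f} min, MTTR: {mttr_min:.1f} min")
```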
Advanced Threat Detection and Response
Vector’s Security and Compliance Monitoring (SCM) module goes beyond basic detection with its advanced threat detection and response capabilities. Through User and Entity Behavior Analytics (UEBA), the system tracks the behavior of users and entities within the network, learning from past activities to identify suspicious behavior that may signal a breach. By continuously analyzing patterns and data, the system offers a dynamic and adaptable defense strategy against evolving cyber threats.
Security Orchestration, Automation, and Response (SOAR) further enhances Vector’s capabilities by automating the response process. This automation reduces the mean time to respond (MTTR) by offering guided response recommendations, ensuring swift action when a threat is identified. Automated playbooks allow for a quick and effective resolution to incidents, minimizing damage and disruption to business operations.
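As an illustration of the playbook idea, here is a simplified sketch of what an automated response playbook might look like. The alert types, fields, and response actions are hypothetical, not Vector's actual playbook format:

```python
# Hypothetical alert types mapped to ordered response steps; a real SOAR
# platform would ship these as configurable playbooks with integrations.
PLAYBOOKS = {
    "credential_stuffing": ["lock_account", "force_password_reset", "notify_soc"],
    "malware_detected":    ["isolate_host", "capture_memory_image", "notify_soc"],
}

def run_playbook(alert: dict) -> list[str]:
    """Map an incoming alert to guided response steps and execute them."""
    steps = PLAYBOOKS.get(alert["type"], ["escalate_to_analyst"])
    for step in steps:
        print(f"[{alert['id']}] executing response step: {step}")
    return steps

run_playbook({"id": "INC-1042", "type": "malware_detected"})
```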
Ensuring Compliance and Secure Operations
In addition to threat detection, Vector also emphasizes compliance monitoring and reporting. Companies must maintain compliance with security standards such as ISO 27001 and SOC 2, and Vector ensures that these standards are met by continuously monitoring for any deviations. This proactive approach not only keeps businesses compliant but also identifies areas for improvement, ensuring that security operations are always aligned with best practices.
Vector's SCM module helps manage these compliance requirements by providing automated reports and alerts when potential compliance risks arise. By integrating compliance and security management, organizations can streamline their auditing processes and minimize the risk of penalties due to non-compliance.
Robust Data Protection
With data protection becoming a top priority, Vector provides multiple layers of security to safeguard sensitive information. Data encryption, both at rest and in transit, ensures that confidential information is protected from unauthorized access. Furthermore, access controls, including Role-based Access Control (RBAC) and Multi-factor Authentication (MFA), restrict who can access data, ensuring only authorized personnel have the necessary permissions.
To comply with privacy regulations like GDPR and CCPA, Vector incorporates advanced techniques such as data anonymization and pseudonymization, adding another layer of protection. This comprehensive data security strategy ensures that businesses can maintain confidentiality while adhering to global privacy standards.
Enhancing Network Security
Vector also excels in network security, utilizing robust firewall protocols, intrusion detection systems, and secure transmission methods to protect the network from unauthorized access and attacks. Regular vulnerability assessments ensure that potential weaknesses are identified and rectified before they can be exploited.
With continuous 24/7 monitoring and automated alerts, Vector ensures that organizations can quickly detect and respond to security incidents. Integration with Security Information and Event Management (SIEM) tools enhances its ability to manage incidents and investigate threats, keeping networks safe from malicious activity.
Conclusion
In an era where cyberattacks are a constant threat, leveraging advanced technologies like AI-driven threat detection is essential for safeguarding critical systems and data. Vector, with its SCM module, delivers an all-encompassing security solution that includes advanced threat detection, compliance monitoring, and automated incident response. By integrating AI and behavioral analytics, Vector empowers businesses to stay ahead of threats and maintain a secure digital environment.
From network security to data protection and compliance, Vector’s robust security architecture ensures that organizations are not only protected but also prepared to face the ever-evolving cyber landscape. 
Click here to learn more about Vector’s AI-driven threat detection and how it can protect your business from potential threats.
0 notes
b2bcybersecurity · 1 year
Text
Insufficient collaboration within companies increases cyber risk
Poor internal communication, unclear responsibilities, and a heterogeneous tool landscape make cyber risk management in companies more difficult. There is a wide gap between what companies expect from effective risk management of the external attack surface (the internet-reachable IT assets a company exposes) and the reality on the ground. That is the conclusion of a thought leadership report prepared by the analyst firm Forrester and commissioned by CyCognito, the market leader in External Attack Surface Risk Management (EASM). A total of 304 security and IT decision-makers in the USA, Germany, France, the UK, and Canada, all of whom are also responsible for risk assessment within their companies, were surveyed.
Tool sprawl and poor collaboration increase risk
According to the report, the biggest hurdles to effective management are insufficient communication, a heterogeneous tool landscape, unclear responsibilities, and ineffective methods for prioritizing risks, which above all amount to challenges in making collaboration work. Remedies include centrally used tools for rapid detection (Mean Time to Detection, MTTD), which in turn enable faster mean remediation times (MTTR), and a single source of truth as a shared basis of information. Undetected vulnerabilities in internet-reachable assets, such as insecurely configured cloud solutions, databases, and IoT devices, pose an enormous risk to corporate IT security. At the same time, current risk management practices for identifying, prioritizing, and remediating these vulnerabilities rarely meet the expectations of those responsible. Although 81 percent of respondents rate security tests, processes, or exercises that uncover weaknesses in security controls and mechanisms as an important risk management instrument, 53 percent discovered a considerable number of previously unknown external assets during their most recent risk assessment.
Many use more than ten different tools
According to Forrester, this discrepancy is mainly due to insufficient internal collaboration, a circumstance reflected in several findings. One indicator is the heterogeneity of the tool landscape: almost 40 percent of the participating companies use more than ten different tools, spread across multiple teams and operated independently of one another, instead of making the findings available to everyone involved. These "silos" hamper the necessary communication and collaboration. Only 22 percent of respondents have a cross-functional team responsible for effectively prioritizing countermeasures. As a result, one in four surveyed companies takes several weeks or even longer to react to new, sometimes severe risks. Overall, 40 percent of respondents rate the relationships between the security, IT, and business teams involved as consistently negative.
Central automation tools and a single source of truth provide relief
To effectively reduce the risk posed by vulnerabilities in external assets through rapid detection, prioritization, and remediation, companies should take two measures according to the report.
First, a company-wide single source of truth should exist for capturing and assessing risks: a single information source that is used by everyone involved and kept permanently up to date. The collaboration this requires also improves the mood between teams and has a direct influence on MTTR. Reaching this goal is made easier by the second recommended measure: introducing a central risk-mitigation solution that automates important core tasks and performs them continuously. This includes end-to-end mapping of business structures, regular security tests that also uncover "blind spots", and the correct attribution of assets. Together, these measures enable a unified view of the external attack surface and the prioritization and planning of countermeasures, and thus effective risk management.
About CyCognito
CyCognito is the market leader in External Attack Surface Risk Management (EASM) and counts many Fortune 2000 companies among its customers. The CyCognito platform benefits not only large enterprises and corporations but also mid-sized businesses. It enables proactive, continuous management of the potential attack surface a company exposes through its internet-reachable assets and helps control and minimize the associated risks.
0 notes
ericvanderburg · 1 year
Text
mean time to detect (MTTD)
http://i.securitythinkingcap.com/SqBwzv
0 notes
dreamtech11 · 2 years
Text
Scaling for Success: How Dream11 Uses Predictive Analytics and Real-time Monitoring to Ensure 100% Uptime During Peak Seasons(Part-2)
Using technology to address some of the most difficult IT industry challenges
Observability from a single dashboard enhances efficiency.
Since our network is complex and distributed across microservices, monitoring is essential to ensure accelerated diagnosis. Our monitoring tools let us track the performance of the DNS, application, infrastructure, and ingress/egress architecture. Prior to the round lock, every second counts. If monitoring is not handled properly, diagnosis can take significant time and effort, especially since many factors need to be taken into account, from networking to applications and business performance metrics. During a fantasy sports competition, our status pages and dashboards help us concentrate on the areas that need immediate attention.
Examples include the top Relational Database Service (RDS) instances by connections established, the top Application Programming Interfaces (APIs) with response times greater than 200 ms, or Central Processing Unit (CPU) usage. We have built a bird's-eye view of the whole Dream11 infrastructure on a single dashboard. It allows us to resolve problems rapidly and reduce the Mean Time To Detect (MTTD) and Mean Time To Resolution (MTTR). Our monitoring tool can correlate logs, network measurements, CloudWatch metrics, and APM metrics.
Performance Benchmarking & Testing
Another critical phase of any software's life cycle is performance testing. To identify flaws and create standards for each technical component, we conduct routine chaos and load testing.
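The post doesn't name the load-testing tool used; as one illustration, here is a minimal load test written with the open-source Locust framework. The endpoint path is hypothetical:

```python
from locust import HttpUser, task, between

class ContestUser(HttpUser):
    """Simulates a fan browsing contests; tune tasks and weights to real traffic."""
    wait_time = between(1, 3)  # think time between requests, in seconds

    @task
    def view_contests(self):
        # hypothetical endpoint standing in for a real contest-listing API
        self.client.get("/api/contests")
```

Run it with, for example: locust -f loadtest.py --host https://api.example.com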
All of our new app technologies include clever user handling to make sure that even when backend applications perform poorly, the user experience is not adversely affected.
Our network monitoring lets us detect network issues immediately; checking TCP retransmits by Availability Zone (AZ), for example, is a clear indicator of whether a problem is network-related. To examine the performance of our network, we have a variety of slice-and-dice options.
We can filter by availability zone, service, environment, domain, host, IP, VPC, port, region, IP type, and more, including traffic from local, private, and public IP addresses.
For instance, our APM product offers distributed end-to-end tracing from frontend hardware to databases. Our monitoring tools let us automatically monitor service dependencies and latency, seamlessly correlating distributed traces with frontend and backend data to eliminate problems and give our users the best possible experience. Distributed tracing solves the problem of providing visibility into a request's lifecycle across several systems.
This is quite helpful for debugging and determining the areas of the program where the most time is spent. We have a service map that examines each service to determine its RED metrics, dependencies, and filtering options for application services, databases, caches, Lambda functions, and custom scripts. The monitoring agent delivers data to our tool every 10 seconds, and the service map reflects it in near real-time. The map displays all services in green if no difficulties are found and in red if any are. This information is retrieved from the monitor set up for each service.
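The APM product itself isn't named here; as a generic sketch, this is what instrumenting distributed tracing looks like with the vendor-neutral OpenTelemetry SDK (the service and span names are hypothetical):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer; in production you would export to your APM backend
# (e.g. over OTLP) instead of the console.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("contest-service")  # hypothetical service name

# Nested spans form the end-to-end trace from request handling to the DB call;
# their timings are what feed latency and RED-metric views.
with tracer.start_as_current_span("handle-join-contest"):
    with tracer.start_as_current_span("db-query"):
        pass  # real work goes here
```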
0 notes
tacsec · 2 years
Text
Is your security strategy built on the right platform?
Security platforms simplify integration, improve visibility, share intelligence, and automate workflows across endpoints, cloud, network, and applications. They integrate vendor-specific functionality and third-party functions so that security teams can work more efficiently, faster, and more collaboratively.
In addition to reducing operational costs, security platforms enhance operational efficiency and precision, improve business security, and maintain business continuity.
Gartner talks about the future of security teams with SOAR (Security Orchestration, Automation, and Response)
According to Gartner, SOAR technologies enable organizations to digest and apply inputs from different sources (primarily SIEM systems). The desired outcome can be achieved by integrating these solutions with other technologies and automating them. In addition, there are features for managing cases and incidents, managing threat intelligence, dashboards, and reports, as well as analytics that can be applied across a range of processes.
Security operations activities such as threat detection and response are significantly enhanced with SOAR tools that assist human analysts by providing machine-powered assistance to increase efficiency and consistency.
The Cisco model and ESOF are based on modern platform architecture
Like Cisco's hierarchical network model, the ESOF network model consists of three layers:
1. The Core Layer
2. The Distribution Layer
3. The Access Layer
The main advantage of the ESOF network model is that it helps design, deploy, and maintain a scalable, reliable, cost-effective internetwork.
Improve Performance: It enables the creation of high-performance networks.
Exceptional management & troubleshooting: It allows better network management and isolates the origin of network trouble.
Enhanced Filter/Policy creation and application: It allows better filter/policy creation and application.
Adaptability: It allows the user to efficiently integrate future growth.
Better Redundancy: It provides better redundancy, as it has multiple links across multiple devices.
Benefits of Security orchestration, automation, and response (SOAR) platforms
Speedy Detection & Reaction Times: Day by day, security threats are increasing rapidly. SOAR's enhanced data context, merged with automation, lowers the mean time to detect (MTTD) and mean time to respond (MTTR). Hence, SOAR lessens the impact of threats by detecting and responding to them more quickly.
Better Threat Context: The SOAR platform can provide better context, analysis, and updated threat information by consolidating more data from a broad array of tools and systems.
Uncomplicated Management: SOAR platforms consolidate dashboards from various security systems. Therefore, helping the SecOps and other teams by amalgamating information and data handling, streamlining management, and saving time.
Adaptability: As security event volume grows, time-consuming manual processes become unmanageable. SOAR's orchestration, automation, and workflows can scale to meet that demand.
Prioritize tasks more effectively: Automating lower-level threats enhances SecOps and SOC teams’ responsibilities, making them more efficient in prioritizing and responding to threats that require human intervention.
Rationalizing Operations: Automating lower-level tasks through standard procedures and playbooks enables SecOps teams to respond to more threats in a shorter period. Additionally, these workflows ensure that standardized remediation efforts are applied across all systems throughout the organization.
Reporting and Collaboration: Reporting and analysis on SOAR platforms enable better data management processes and more effective security response efforts. In addition to improving communication and collaboration between disparate enterprise teams, SOAR platforms have central dashboards that facilitate information sharing.
Affordable Costs: When security analysts use SOAR tools, they can reduce costs, as opposed to manually operating every threat analysis, detection, and response process.
How ESOF is the choice of SOAR platform for VM
TAC Security’s ESOF products can execute automated tasks between various cybersecurity teams using a single platform. ESOF is a platform based on SOAR (Security Orchestration, Automation, and Response) technology. SOAR platforms are similar to SIEMs (Security Information and Event Management) in that they can aggregate, correlate, and analyze details from different sources.
In addition, the ESOF platform is the choice for a cloud-based, SOAR-platform, risk-based Vulnerability Management solution. It also integrates threat intelligence and automates incident investigation and response workflows based on playbooks created by the security team.
Under ESOF comes three high-end products:
1. ESOF VMDR: Analyze, evaluate, prioritize, and mitigate all the dominant vulnerabilities and risks across the IT landscape in real-time.
2. ESOF VMP: The ESOF VMP provides data from various organizational vulnerabilities into a risk metric.
3. ESOF AppSec: Unified Vulnerability Management Solution for Detecting and Protecting Web and App Assets.
Stay vigilant! Download the ESOF product datasheet to learn more.
0 notes
Link
A key metric for measuring how well you handle system outages is the Mean Time To Recovery, or MTTR. It's basically the time it takes you to restore the system to working condition. The shorter the MTTR, the faster problems are resolved, the less impact your users experience, and hopefully the more likely they are to keep using your product!
And the first step to resolve any problem is to know that you have a problem. The Mean Time to Discovery (MTTD) measures how quickly you detect problems and you need alerts for this - and lots of them.
Exactly what alerts you need depends on your application and what metrics you are collecting. Managed services such as Lambda, SNS, and SQS report important system metrics to CloudWatch out-of-the-box. So depending on the services you leverage in your architecture, there are some common alerts you should have. And here are some alerts that I always make sure to have.
Lambda (Regional Alerts)
You might have noticed the regional metrics (I know, the dashboard says Account-level even though its own description says it's "in the AWS Region") in the Lambda dashboard page.
Regional concurrency alert
The regional ConcurrentExecutions is an important metric to alert on. Set the alert threshold to ~80% of your current regional concurrency limit (which starts at 1000 for most regions).
This way, you will be alerted when your Lambda usage is approaching your current limit so you can ask for a limit raise before functions are throttled.
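A sketch of that alarm with boto3 follows; it assumes the default 1,000 regional limit, so check your account's actual limit and set roughly 80% of it:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="lambda-regional-concurrency-80pct",
    Namespace="AWS/Lambda",
    MetricName="ConcurrentExecutions",  # no dimensions = region-wide metric
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=800,  # ~80% of an assumed 1,000 regional limit
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    # AlarmActions=[...]  # add your SNS topic ARN to get notified
)
```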
Regional throttles alert
You may also wish to add alerts to the regional Throttles metric. But this depends on whether or not you're using Reserved Concurrency. Reserved Concurrency limits how much concurrency a function can use, and throttling excess invocations shows that it's doing its job. But that throttling can also trigger your alert with false positives.
Lambda (Per-Function Alerts)
(Note: depending on the function's trigger, some of these alerts might not be applicable.)
Error rate alert
Use CloudWatch metric math to calculate the error rate of a function - i.e. 100 * Errors / MAX([Errors, Invocations]). Align the alert threshold with your Service Level Agreements (SLAs). For example, if your SLA states that 99% of requests should succeed then set the error rate alert to 1%.
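Here is one way that error-rate alarm might be created with boto3 and CloudWatch metric math; "my-function" is a placeholder name, and the 1% threshold matches the 99% SLA example above:

```python
import boto3

boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="my-function-error-rate",
    EvaluationPeriods=5,
    Threshold=1.0,  # alarm when more than 1% of invocations error
    ComparisonOperator="GreaterThanThreshold",
    Metrics=[
        {"Id": "errors", "ReturnData": False, "MetricStat": {
            "Metric": {"Namespace": "AWS/Lambda", "MetricName": "Errors",
                       "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}]},
            "Period": 60, "Stat": "Sum"}},
        {"Id": "invocations", "ReturnData": False, "MetricStat": {
            "Metric": {"Namespace": "AWS/Lambda", "MetricName": "Invocations",
                       "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}]},
            "Period": 60, "Stat": "Sum"}},
        # the metric math from above: 100 * Errors / MAX([Errors, Invocations])
        {"Id": "errorRate", "Label": "Error rate (%)", "ReturnData": True,
         "Expression": "100 * errors / MAX([errors, invocations])"},
    ],
)
```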
Throttles alert
Unless you're using Reserved Concurrency, you probably shouldn't expect the function's invocations to be throttled. So you should have an alert against the Throttles metric.
DeadLetterErrors alert
For async functions with a dead letter queue (DLQ), you should set up an alert against the DeadLetterErrors metric. This tells you when the Lambda service is not able to forward failed events to the configured DLQ.
DestinationDeliveryFailures alert
Similar to above, for functions with Lambda Destinations, you should set up an alert against the DestinationDeliveryFailures metric. This tells you when the Lambda service is not able to forward events to the configured destination.
IteratorAge alert
For functions triggered by Kinesis or DynamoDB streams, the IteratorAge metric tells you the age of the messages they receive. When this metric starts to creep up, it's an indicator that the function is not keeping pace with the rate of new messages and is falling behind. The worst-case scenario is that you will experience data loss, since data in the streams is only kept for 24 hours by default. This is why you should set up an alert against the IteratorAge metric so that you can detect and rectify the situation before it gets worse.
How Lumigo Helps
Even if you know what alerts you should have, it still takes a lot of effort to set them up. This is where 3rd-party tools like Lumigo can also add a lot of value. For example, Lumigo enables a number of built-in alerts (using sensible, industry-recognized defaults) for auto-traced functions so you don't have to manually configure them yourself. But you still have the option to disable alerts for individual functions should you choose to.
Here are a few of the alerts that Lumigo offers:
Predictions - when Lambda functions are dangerously close to resource limits (memory/duration/concurrency, etc.)
Abnormal activity detected - invocations (increase/decrease), errors, cost, etc.
On-demand report of misconfigured resources (missing DLQs, wrong DynamoDB throughput mode, etc.)
Threshold exceeded: memory, errors, cold starts, Lambda runtime crashes, etc.
Furthermore, Lumigo integrates with a number of popular messaging platforms so you can be alerted promptly through your favorite channel.
Oh, and Lumigo does not charge extra for alerts. You only pay for the traces that you send to Lumigo, and it has a free tier for up to 150,000 traced invocations per month. You can sign up for a free Lumigo account here.
API Gateway
By default, API Gateway aggregates metrics for all its endpoints. For example, you will have one 5xxError metric for the entire API, so when there is a spike in 5xx errors you will have no idea which endpoint was the problem.
You need to Enable Detailed CloudWatch Metrics in the stage settings of your APIs to tell API Gateway to generate method-level metrics. This adds to your CloudWatch cost but without them, you will have a hard time debugging problems that happen in production.
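If you prefer doing this in code rather than the console, here is a sketch using boto3; the API ID and stage name are placeholders:

```python
import boto3

boto3.client("apigateway").update_stage(
    restApiId="abc123",  # placeholder REST API id
    stageName="prod",
    patchOperations=[
        # "/*/*/..." applies the setting to every resource and method in the stage
        {"op": "replace", "path": "/*/*/metrics/enabled", "value": "true"},
    ],
)
```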
Once you have per-method metrics handy, you can set up alerts for individual methods.
p90/p95/p99 Latency alert
When it comes to monitoring latency, never use Average. "Average" is just a statistical value; on its own, it's almost meaningless. Until we plot the latency distribution, we won't actually understand how our users are experiencing our system. For example, wildly different latency distributions can produce exactly the same average, even though users experience them very differently.
Seriously, always use percentiles.
So when you set up latency alerts for individual methods, keep two things in mind:
Use the Latency metric instead of IntegrationLatency. IntegrationLatency measures the response time of the integration target (e.g. Lambda) but doesn't include any overhead that API Gateway adds. When measuring API latency, you should measure the latency as close to the caller as possible.
Use the 90th, 95th, or 99th percentile. Or maybe use all 3, but set different threshold levels for them. For example, p90 Latency at 1 second, p95 Latency at 2 seconds, and p99 Latency at 5 seconds.
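A sketch of those tiered percentile alarms with boto3 follows; the API name, stage, resource, and method are placeholders, and Latency is reported in milliseconds:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
for pct, threshold_ms in [("p90", 1000), ("p95", 2000), ("p99", 5000)]:
    cloudwatch.put_metric_alarm(
        AlarmName=f"my-api-GET-orders-latency-{pct}",
        Namespace="AWS/ApiGateway",
        MetricName="Latency",  # not IntegrationLatency, per the point above
        Dimensions=[
            {"Name": "ApiName", "Value": "my-api"},
            {"Name": "Stage", "Value": "prod"},
            {"Name": "Resource", "Value": "/orders"},
            {"Name": "Method", "Value": "GET"},
        ],
        ExtendedStatistic=pct,  # percentile statistic instead of Average
        Period=60,
        EvaluationPeriods=5,
        Threshold=threshold_ms,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
    )
```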
4xx rate/5xx rate alert
When you use the Average statistic for API Gateway's 4XXError and 5XXError metrics, you get the corresponding error rate. Set up alerts against these to alert yourself when you start to see an unexpected number of errors.
SQS
When working with SQS, you should set up alerts against the ApproximateAgeOfOldestMessage metric for an SQS queue. It tells you the age of the oldest message in the queue. When this metric trends upwards, it means your SQS function is not able to keep pace with the rate of new messages.
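For example, a sketch of that alarm; the queue name and the 5-minute threshold are placeholders to tune per workload:

```python
import boto3

boto3.client("cloudwatch").put_metric_alarm(
    AlarmName="my-queue-oldest-message-age",
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": "my-queue"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=300,  # seconds; alarm if the oldest message is over 5 minutes old
    ComparisonOperator="GreaterThanThreshold",
)
```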
Step Functions
There are a number of metrics that you should alert on:
ExecutionThrottled
ExecutionsAborted
ExecutionsFailed
ExecutionsTimedOut
They represent the various ways state machine executions would fail. And since Step Functions are often used to model business-critical workflows, I would usually set the alert threshold to 1.
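A sketch that creates all four alarms for one state machine; the ARN is a placeholder, and the threshold of 1 reflects the reasoning above:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:my-workflow"  # placeholder

for metric in ["ExecutionThrottled", "ExecutionsAborted",
               "ExecutionsFailed", "ExecutionsTimedOut"]:
    cloudwatch.put_metric_alarm(
        AlarmName=f"my-workflow-{metric}",
        Namespace="AWS/States",
        MetricName=metric,
        Dimensions=[{"Name": "StateMachineArn", "Value": ARN}],
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=1,  # any failure is worth knowing about
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
    )
```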
0 notes
quantustecsol · 4 years
Text
New SOC Research Reveals Security Teams Overconfident in Detecting Cyberthreats
Source: Security Magazine
A new report that examines the processes and effectiveness of corporate security operations centers (SOCs) reveals that 82% of SOCs are confident in the ability to detect cyberthreats, despite just 22% of frontline workers tracking mean time to detection (MTTD), which helps determine hacker…
0 notes
skaug · 5 years
Quote
MTTD = Mean Time To Detection, a #ChaosEngineering metric. — Joshua Kerievsky (@JoshuaKerievsky) September 25, 2019
http://twitter.com/JoshuaKerievsky/status/1176988274579009536
0 notes
annadianecass · 7 years
Text
Fujitsu’s Top Five Predictions For Security In 2018
Using CTI to support a back to basics approach
Cyber Threat Intelligence (CTI) can be defined in many different ways; at its simplest, it can be a threat feed. In the coming year, it will be important to use threat intelligence to provide customers with an early warning system and context around threats. In short, by doing the hard work so customers don't have to, and depending on the service and level of access, suppliers can actually block threats before they have a chance to do any damage.
  That threat intelligence, in most cases, is simply providing guidance on ‘protecting’ using basic defences such as patch management. It’s challenging in any corporate environment expressing the severity of a vulnerability not only as a technical risk, but also a financial, human and business risk. In a perfect world we would patch all the things, but reality dictates an alternative practical world. More often than not, patching a financial system for a critical vulnerability in Java the day before end of the financial year will not whet many appetites through fear of breaking the system, despite successful pre-production patching.
Combining vulnerability management with threat intelligence is a great use case for protecting corporate environments. Customers are right to be worried about the next strain of global cyber-security incidents, but in last year's Petya and WannaCry outbreaks, the malware propagated via an SMB vulnerability that had been known for months and simply needed patching. For example, here at Fujitsu, we actually provided a threat advisory on that patch to CTI customers three months before Petya spread. What's more, we also provided our CTI customers with a threat advisory on the Apache Struts vulnerability Equifax was exploited through several months earlier. We also observed exploits in the wild for this attack, so there was clearly a high impact.
    Political eggshells
  The line between cyber security and politics is distorted with continued reports of election tampering or breaches of government agencies and departments. Investigations surrounding the US Election will rumble on into 2018 with core concerns around the manipulation of security controls and ‘sleight of hand’. There were reports of similar inferred disruptive activity during the 2017 French election. In recent years, senior members of political parties around the world became all too familiar with concepts such as ‘Phishing’ and ‘Incident Response’. In the case of the Democratic National Committee (DNC), the infamous compromise which Crowdstrike traced back to Russia, the monthly cost of the incident response to remove the attackers from the DNC network was reportedly $50k a month.
  Nation States continue to grow in cyber security expertise with the skill, will and resource to monetise from their endeavours or disrupt their neighbours. Not every threat model needs to protect against adversaries that seek to destabilise a nation, however, with the increasing adoption of digital services and frequent attribution of cyber-attacks to Nation States, it is feasible to suggest attacks against commercial entities to support political objectives will only continue to increase.
    Zero day danger 
  Boutique Zero day sellers such as Zerodium offer significant bounties to researchers, such as the $1.5m offered in 2017 for an iOS exploit. Initiatives by the US Government such as ‘hack the Army’ demonstrate a willingness for researchers to find exploits in the US digital services. This is an ethical approach where the US army accepted the risk of vulnerabilities being exploited and more importantly rewarded those who reported them. Shadowbrokers rose to prominence in 2017 as a group who released exploits reportedly stolen from the National Security Agency in the United States. The group released multiple zero day exploits such as ETERNALBLUE and DOUBLEPULSAR that were subsequently weaponised in cyber-attacks such as WannaCry and Adylkuzz.
  The political confirmation of ‘hoarding 0days’ by the US Government was made public in 2017 in the Vulnerability Equities Policy (VEP). This policy essentially means the government can choose to withhold a disclosure if it believes it is in the interests of safety and security. Fortunately, for the major attacks observed in 2017, patches were available for the numerous vulnerabilities that were exploited allowing an element of protection or mitigation against significant damage. The alternative landscape where boutique sellers or Government agencies are compromised for their hoarded zero day exploits is unthinkable, particularly where there is no known patch or protection.
    Effective Security Monitoring
  As data and our digital lives continue to grow and connect, there is an expanded internet with increasingly blurred lines for network perimeters. A consequence of this is more data to manage and an increase in cyber-attacks to detect and analyse.
A fundamental prerequisite for any business is security monitoring; however, in order to address and keep pace with the continued rise in cyber-attacks, organisations must continue to be innovative for monitoring to remain effective. The threat landscape continues to grow in velocity and complexity, and Security Operations Centres (SOCs) are finding it difficult to keep up with the range of attacks facing modern-day businesses. Traditional technologies using a manual approach are no longer sufficient, and a fresh, proactive approach is required to counter modern-day cyber criminals.
  These include analytical services such as User Entity & Behaviour Analytics (UEBA), Endpoint Detection & Response (EDR) and Managed Detection & Response (MDR) in a strong advanced threat eco-system. Blended approaches of human analytical skills underpinned by security automation and orchestration (SAO) will be necessary to address real issues facing SOCs such as alarm fatigue.
A future is certain where SOCs leverage Artificial Intelligence, machine learning, and an API- and playbook-driven model for effective security monitoring. Automated threat intelligence enrichment for incidents, freeing up valuable analyst time, will be necessary as the industry faces up to the increasing cyber skills gap.
    Incident response metrics for the win
  Whilst a service-level agreement (SLA) will always be the measuring stick for any delivered services, organisations will increasingly adopt new metrics to measure incident response.
UK Government guidance defines the ability to react to cyber-attacks as follows: 'an effective response to an attack depends upon first being aware that an attack has happened or is taking place. A swift response is essential to stop the attack, and to respond and minimise the impact or damage caused'.
An incident response metric that will be gradually adopted for this purpose is Mean Time to Respond (MTTR), which organisations will track in order to see demonstrable reductions over time.
  Incident Response and, more importantly, how fast organisations can respond to incidents will be increasingly important with the looming General Data Protection Regulation (GDPR) and Network & Information Systems (NIS) legislations. A notifiable breach has to be reported to the Information Commissioners Office (ICO) within 72 hours so reducing the Mean Time to Respond (MTTR) will become critical and businesses must have a robust Incident Response plan.
Mean Time to Dwell (MTTD) is a term used to describe the number of days an attacker is inside a network before being detected. A study by FireEye across EMEIA organisations found the average MTTD was 489 days. This adds weight to the view that the current approach of traditional SOCs is not as effective as it could be. Attackers are continually finding innovative methods of attacking and exfiltrating data through network layers, and the security industry must continue to innovate to reduce the overall MTTD.
The post Fujitsu’s Top Five Predictions For Security In 2018 appeared first on IT SECURITY GURU.
0 notes
dreamtech11 · 2 years
Text
Scaling for Success: How Dream11 Uses Predictive Analytics and Real-time Monitoring to Ensure 100% Uptime During Peak Seasons(Part-1)
At Dream11, we strive to offer the best sports engagement experience for our users. As the largest fantasy sports platform with 150 million fans participating in over 10,000 contests, it can be difficult to predict traffic patterns, especially during high-demand events like the IPL and World Cup. To maintain 100% uptime during these matches, we use prediction-based scaling and scale-out infrastructure.
We generate an immense amount of data from our mobile applications and microservices, capturing user action events, system metrics, network metrics, and more. Our machine learning algorithms process this data to predict demand based on factors such as players, tournaments, virality, and user playing patterns. During the T20 ICC World Cup, for example, our platform is capable of managing up to 6.21 million concurrent users at the edge layer.
At Dream11, our Dreamsters play a crucial role in ensuring an optimal user experience. Our service owners have readiness lists and runbooks in place to quickly resolve any incidents, and our customer service team is equipped to handle incidents efficiently.
The observability of our network through a single dashboard helps us stay efficient, and continuous monitoring is critical for accelerated troubleshooting. Our monitoring tools track the performance of our infrastructure, applications, and DNS, and our status pages and dashboards allow us to quickly identify and address issues. Our unified dashboard provides a bird's-eye view of the entire Dream11 infrastructure, reducing Mean Time To Detect (MTTD) and Mean Time To Resolution (MTTR).
Overall, it's evident that Dream11 is using technology to tackle some of the biggest IT industry challenges, and it's great to see the emphasis on delivering an exceptional user experience.
0 notes