#Data Guard Broker Configuration
ocptechnology · 2 years ago
How to set up Data Guard Broker Configuration
Setting up a Data Guard Broker configuration involves the following steps. Before the DG Broker setup you must already have a working Data Guard setup (see How to switchover in Data Guard).
Step 1. Current status of the broker: check the current status of DG Broker on both sides.
Primary side:
SQL> show parameter DG_BROKER_START
NAME TYPE VALUE
------------------------------------ ----------- ---------
dg_broker_start…
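The excerpt is truncated here. As a rough sketch of where the setup usually goes next, assuming DG_BROKER_START is still FALSE on both databases (all database names, connect identifiers and configuration names below are placeholders, not taken from the post):
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;   -- run on the primary and the standby
$ dgmgrl sys@prod
DGMGRL> CREATE CONFIGURATION 'dg_config' AS PRIMARY DATABASE IS 'prod' CONNECT IDENTIFIER IS prod;
DGMGRL> ADD DATABASE 'prod_stby' AS CONNECT IDENTIFIER IS prod_stby MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;
Once SHOW CONFIGURATION reports SUCCESS, the broker is managing the pair and switchover or failover can be driven entirely from DGMGRL.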
ericvanderburg · 5 years ago
Creating Oracle 19c Physical Standby & Configuring Data Guard Broker with Ansible [GITHUB]
http://i.securitythinkingcap.com/RMKkBl
powerfulbox · 5 years ago
Windows Server 2019 Available at Powerful Box
New Post has been published on https://www.powerfulbox.co.uk/blog/2019/12/31/windows-server-2019-available-at-powerful-box/
Microsoft Windows Server 2019 is now available at Powerful Box and comes included with any SSD VPS.
Microsoft Windows Server 2019 is the latest iteration of Microsoft’s venerable operating system (OS), and it brings to the table a laundry list of new and improved capabilities and features. This release of Windows Server should especially appeal to IT professionals because of the huge number of functional scenarios it can address. While Windows Server isn’t seen on customer premises as much anymore, it’s still the most popular server OS, and that’s across both on-premises data centers as well as in public clouds where it’s widely used in Infrastructure-as-a-Service (IaaS) implementations.
New features in Windows Server 2019
System Insights: System Insights is a new feature available in Windows Server 2019 that brings local predictive analytics capabilities natively to Windows Server. These predictive capabilities, each backed by a machine-learning model, locally analyze Windows Server system data, such as performance counters and events, providing insight into the functioning of your servers and helping you reduce the operational expenses associated with reactively managing issues in your Windows Server deployments.
Hybrid Cloud: Server Core App Compatibility Feature on Demand. The Server Core App Compatibility Feature on Demand is an optional feature package that can be added to Windows Server 2019 Server Core installations. Features on Demand (FODs) are Windows feature packages that can be added at any time. Common features include language resources like handwriting recognition or other features like the .NET Framework (.NetFx3). When Windows 10 or Windows Server needs a new feature, it can request the feature package from Windows Update. This feature significantly improves the app compatibility of the Windows Server Core installation option by including a subset of binaries and packages from Windows Server with Desktop Experience, without adding the Windows Server Desktop Experience graphical environment.
Improvements in the area of Security
i. Windows Defender Advanced Threat Protection (ATP): ATP’s deep platform sensors and response actions expose memory and kernel-level attacks and respond by suppressing malicious files and terminating malicious processes. Windows Defender ATP Exploit Guard is a new set of host-intrusion prevention capabilities. The four components of Windows Defender Exploit Guard are designed to lock down the device against a wide variety of attack vectors and block behaviors commonly used in malware attacks.
ii. Security with Software Defined Networking (SDN): These security enhancements are integrated into the comprehensive SDN platform introduced in Windows Server 2016. They include: encrypted networks, firewall auditing, virtual network peering, and egress metering.
iii. Shielded Virtual Machines improvements: Linux support – if you run mixed-OS environments, Windows Server 2019 now supports running Ubuntu, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server inside shielded virtual machines. Troubleshooting improvements – troubleshooting shielded virtual machines has been made easier by VMConnect Enhanced Session Mode and PowerShell Direct. These features do not need to be configured, and they become available automatically when a shielded VM is placed on a Hyper-V host running Windows Server version 1803 or later.
iv. HTTP/2 for a faster and safer Web: Improved coalescing of connections to deliver an uninterrupted and properly encrypted browsing experience. Upgraded HTTP/2’s server-side cipher suite negotiation for automatic mitigation of connection failures and ease of deployment. Changed the default TCP congestion provider to Cubic to give you more throughput.
Storage
This release of Windows Server adds the following storage changes and technologies:
i. Manage storage with Windows Admin Center
ii. Storage Migration Service
iii. Storage Spaces Direct (Windows Server 2019 only)
• Deduplication and compression for ReFS volumes
• Native support for persistent memory
• Nested resiliency for two-node hyper-converged infrastructure at the edge
• Two-server clusters using a USB flash drive as a witness
• Windows Admin Center
• Performance history
• Scale up to 4 PB per cluster
• Mirror-accelerated parity is 2X faster
• Drive latency outlier detection
• Manually delimit the allocation of volumes to increase fault tolerance
iv. Storage Replica
v. File Server Resource Manager
Windows Server 2019 includes the ability to prevent the File Server Resource Manager service from creating a change journal (also known as a USN journal) on all volumes when the service starts.
vi. SMB
• SMB1 and guest authentication removal
• SMB2/SMB3 security and compatibility
vii. Data Deduplication
• Data Deduplication now supports ReFS
• DataPort API for optimized ingress/egress to deduplicated volumes
Application Platform
i. Linux containers on Windows: It is now possible to run Windows and Linux-based containers on the same container host, using the same docker daemon.
ii. Building support for Kubernetes: Windows Server 2019 continues the improvements to compute, networking and storage from the semi-annual channel releases needed to support Kubernetes on Windows.
iii. Container improvements:
• Improved integrated identity
• Better application compatibility
• Reduced size and higher performance
• Management experience using Windows Admin Center (preview)
iv. Low Extra Delay Background Transport: Low Extra Delay Background Transport (LEDBAT) is a latency-optimized network congestion control provider designed to automatically yield bandwidth to users and applications.
v. High performance SDN gateways: This greatly improves the performance for IPsec and GRE connections.
vi. Persistent Memory support for Hyper-V VMs: This can help to drastically reduce database transaction latency or reduce recovery times for low-latency in-memory databases on failure.
vii. Windows Time Service
viii. Network performance improvements for virtual workloads. New features include: Receive Segment Coalescing in the vSwitch and Dynamic Virtual Machine Multi-Queue (d.VMMQ).
Removed Features in Windows Server 2019
• Business Scanning, also called Distributed Scan Management (DSM)
• Internet Storage Name Service (iSNS)
• Print components – now an optional component for Server Core installations
• Remote Desktop Connection Broker and Remote Desktop Virtualization Host in a Server Core installation. These RDS roles are no longer available for use in a Server Core installation. If you need to deploy these roles as part of your Remote Desktop infrastructure, you can install them on Windows Server with Desktop Experience.
Deprecated Features in Windows Server 2019
Features no longer being developed by the team are:
• Key Storage Drive in Hyper-V
• Trusted Platform Module (TPM) management console
• Host Guardian Service Active Directory attestation mode
• Remote Differential Compression API support
• OneSync service
rs3gold2 · 5 years ago
Happy Easter with Enjoying RSorder $18 Coupons for RuneScape Gold for Sale until Apr.19
How to Choose and Configure an Account Continue or Finish Your
cheap runescape gold
Application. Home Home Trader or Investor Small Business Friends and Family Advisor Family Office Advisor Money Manager Introducing Broker Proprietary Trading Group Hedge or Mutual Fund Compliance Officer Administrator Incentive Plan Administrator Educator Referrer. Overview Commissions Interest and Financing Research, News and Market Data Required Minimums Other Fees Advisor Fees Broker Client Markups. IB Feature Explorer Browse all the advantages of an IB account.    
It might be writing about writing (which is what this piece is about). It might be about a sunset, or fresh peaches, or NaturesGuy observations, or romance, or relationships, or child rearing, or travel, or the scent of a wildflower bloom on a Georgia Civil War battlefield. It doesn’t matter what it is that I write about. I just need to write about observations and thoughts and things that bring me joy and happiness and moments. Will I become wealthy from my writing? Will I get a job from my writing? Will thousands of people suddenly read my posts and click through and finance our dream trip? THAT DOES NOT MATTER! What matters is that I’m following my passion, doing what I’m supposed to do, writing about observations I make and thoughts I have and my feelings and my conversations with God.
One common 60-second binary options strategy is to follow the market news closely. When you find a particular update which is bound to influence an asset positively or negatively, trade that asset. You will not be able to win all the trades. But since the asset will move in a particular direction overall, you will be able to win more trades than you lose and you should make a profit out of it.
Yes, if you are thinking about its effective usage then you need not worry as the use of the spy cheating playing card devices is the best ever playing cards tricks to win the game without wasting your time. As you know that some special skills and some sort of intelligence smartness are also required to win your cards game in order to make the unlimited amounts of money. You can also represent your smartness by adopting such best ways to win a poker game. Yes, you can easily win your cards game with the help of some essential playing cards tricks to win. Numerous playing cards tricks are there among which you need to select or choose the most effective and helpful one by which you can easily win your all cards games to make the huge bucks of money.
Mineral resources wodgina lithium mineIllegal sand mining is rampant along the Nai River in Nai Province, causing erosion and threatening people's lives. VNA/VNS Photo V Kh HCM CITY A high ranking city official has demanded punishment illegal sand mining activities, saying the violation must be considered a crime. Nguy Th Phong, chairman of the HCM City Committee, has urged local closely with agencies in neighbouring provinces to prevent further Phong spoke a meeting held last Friday to seek solutions to fight sand mining C Gi waters the city bordering provinces. Illegal sand mining near Gi has reached an level, he said. Le Van Thang, who used to be a foreman at Bong Mieu Gold Mining Co., Ltd. and now works for a local company as a security guard at the Bong Mieu mine is sadly looking at the rusting, empty facilities, while saying: Mieu went bankrupt and the investor has left, but their disastrous mess remains. Life has not returned to normal. The Bong Mieu mine was the most modern gold facility in the country when it was opened in the central province of Quang Nam in 2005 amidst high sounding promises of enriching local state [Read more.] about Lessons gleaned from failing mining ventures
Happy Easter! Welcome to join in RSorder Happy Easter 2020 event to enjoy up to $18 coupons for RuneScape gold, OSRS gold and other products from Apr.11 to Apr.19, 2020.
Four cash coupon codes offered: $3 off code "RET3" for $50+ orders. $7 off code "RET7" for $100+ orders. $12 off code "RET12" for $150+ orders. $18 off code "RET18" for $200+ orders.
Besides, 5% off code "RSYK5" is also offered for Runescape 3 Gold / Osrs gold and all other products. Buy from https://www.rsorder.com/rs-gold at anytime.
print-planr · 5 years ago
PRINTPLANR PRINT MIS — QUICKBOOKS INTEGRATION
PrintPLANR QuickBooks integration allows you to fully synchronize your invoices with QuickBooks accounting software.
PrintPLANR is a cloud-based Print MIS solution that manages various print functions within your print business like CRM, quote manager, job manager, etc., to help you get more organized and functional. The solution when integrated with QuickBooks accounting software, streamlines your entire workflow and looks after all the accounting aspects.
QuickBooks Integration
QuickBooks is an easy-to-use cloud accounting software that is mainly designed to relieve business owners from the stress of managing their own financial data. QuickBooks works incredibly well with any business type as it allows them to scale as they grow. Added to this, QuickBooks is considered as the most popularly used accounting software throughout the globe due to its advanced feature list and compatibility.
PrintPLANR and QuickBooks integration will reduce the burden of your business by automating invoice creation and purchase orders.
QuickBooks accounting software allows you to create customized invoices and purchase orders in real time. Also, the solution syncs across your devices, so you can work anytime and from anywhere.
Here’s what you can do with the accounting software:
Access and manage your financial data from the cloud.
Automatic backup of invoices.
Create invoices, purchase orders, purchase receipts, bills, etc.
Spend the least time on manual data-entry.
Secure access to accounting data.
The integration syncs the following data from PrintPLANR to QuickBooks:
Invoices
Purchase orders
Customer data
Invoice details
Payment details
Expenses
Tax details
How it all works
PrintPLANR can be integrated with both QuickBooks Online and the QuickBooks Desktop version. The Desktop integration, however, involves more of a manual setup, so we recommend QuickBooks Online, which syncs once access is authorized. You can set up a rule while synchronizing PrintPLANR with QuickBooks to send invoices and purchase orders based on a date range, i.e., on a daily, weekly or monthly basis. The integration carries out various functions such as invoice approval, invoice creation, report generation and more.
The only challenge with the QuickBooks Desktop version is the mapping of fields from PrintPLANR, as it might require mapping the fields manually, downloading the files and importing them into the accounting system. You can also use the QuickBooks connector to synchronize your printing software with QuickBooks.
Compatibility with other software
Apart from QuickBooks, PrintPLANR’s highly compatible Print MIS software can also be integrated with any of the following accounting software:
Xero
FreshBooks
MYOB
Sage
Fortnox
Highlights of the integration
Invoices: The invoices can be exported automatically or manually to QuickBooks based on filters or status. The customer details and invoice details are synced, therefore any changes made in PrintPLANR’s settings are automatically reflected in QuickBooks accounting software.
Purchase Order: The purchase orders are linked with QuickBooks accounting software the same way as for invoices.
Customer Data: The integration allows you to export your customer data over to QuickBooks and any further changes made in PrintPLANR settings are automatically mirrored in QuickBooks without duplicating the records.
Seamless Integration: QuickBooks accounting software integrates seamlessly with PrintPLANR to create invoices, bills, payments, estimates, purchase orders, sales receipts, etc.
Customization: Our custom integration options enable your customers to sync their values exactly the way they want to.
Supplier Records: Similar to customer info, the supplier’s info like contact and other details are synced with QuickBooks accounting software.
Is the integration secure?
Configuring QuickBooks is easier than ever as the integration comes with enterprise-level security and multiple layers of protection to guard your data from malware. Two-factor authentication is used to authorize your QuickBooks login. Furthermore, PrintPLANR does not save the user credentials of your QuickBooks account.
Get started with PrintPLANR Print MIS — QuickBooks accounting integration today. Feel free to contact us for more details.
Source: https://www.printplanr.com/printplanr-print-mis-quickbooks-integrations/
ABOUT PRINTPLANR
PrintPLANR is a cloud solution that is created to stand out! Printing businesses, signage industries, print managers/brokers as well as promotional companies can make use of the versatile solution.
Developed by INFOMAZE ELITE, one of the pioneers in providing business solutions across web and mobile platforms, the solution is designed to meet the challenges of print industries in managing the workflow, order fulfillment & payment processes.
Contact Info.
Website: https://www.printplanr.com/
Email ID: [email protected]
Contact No: +91 821 234 0437
terabitweb · 5 years ago
Original Post from Microsoft Secure Author: Todd VanderArk
Legacy infrastructure. Bolted-on security solutions. Application sprawl. Multi-cloud environments. Company data stored across devices and apps. IT and security resource constraints. Uncertainty of where and when the next attack or leak will come, including from the inside. These are just a few of the things that keep our customers up at night.
When security is only as strong as your weakest link and your environments continue to expand, there’s little room for error. The challenge is real: in this incredibly complex world, you must prevent every attack, every time. Attackers must only land their exploit once. They have the upper hand. To get that control back, we must pair the power of your defenders and human intuition with artificial intelligence (AI) and machine learning that help cut through the noise, prioritize the work, and help you protect, detect, and respond smarter and faster.
Microsoft Threat Protection brings this level of control and security to the modern workplace by analyzing signal intelligence across identities, endpoints, data, cloud applications, and infrastructure.
Today, at the Microsoft Ignite Conference in Orlando, Florida, I’m thrilled to share the significant progress we’re making on delivering endpoint security from Microsoft, not just for Microsoft. The Microsoft Intelligent Security Association (MISA), formed just last year, has already grown to more than 80 members and climbing! These partnerships along with the invaluable feedback we get from our customers have positioned us as leaders in recent analyst reports, including Gartner’s Endpoint Protection Platform Magic Quadrant, Gartner’s Cloud Access Security Broker (CASB) Magic Quadrant and Forrester’s Endpoint Security Suites Wave and more.
As we continue to focus on delivering security innovation for our customers, we are:
Reducing the noise with Azure Sentinel—Generally available now, our cloud-native SIEM, Azure Sentinel, enables customers to proactively hunt for threats using the latest queries, see connections between threats with the investigation graph, and automate incident remediation with playbooks.
Discovering and controlling Shadow IT with Microsoft Cloud App Security and Microsoft Defender Advanced Threat Protection (ATP)—With a single click, you can discover cloud apps, detect and block risky apps, and coach users.
Enhancing hardware security with our partners—We worked across our partner ecosystem to offer stronger protections built into hardware with Secured-core PCs, available now and this holiday season.
Offering Application Guard container protection, coming to Office 365—In limited preview now, we will extend the same protections available in Edge today to Office 365.
Building automation into Office 365 Advanced Threat Protection for more proactive protection and increased visibility into the email attacker kill chain—We’re giving SecOps teams increased visibility into the attacker kill chain to better stop the spread of attacks by amplifying your ability to detect breaches through new enhanced compromise detection and response in Office 365 ATP, in public preview now. And later this year, we’re adding campaign views to allow security teams to see the full phish campaign and derive key insights for further protection and hunting.
Getting a little help from your friends—Sometimes you need another set of eyes, sometimes you need more advanced investigators. Available now, with the new experts on demand service, you can extend the capabilities of your security operations center (SOC) with additional help through Microsoft Defender ATP.
Improving your Secure Score—Back up the strength of your team with numbers. New enhancements in Secure Score will make it easier for you to understand, benchmark, and track your progress. We also added new planning capabilities that help you set goals and predict score improvements, and new CISO Metrics & Trends reports that show the impact your work is having on the health of your organization in real-time.
Taking another step in cross-platform protection—This month, we’re expanding our promise to offer protections beyond Windows with Enterprise Detection and Response for Apple Macs and Threat and Vulnerability Management for servers.
There’s no way one person, or even one team, no matter how large could tackle this volume of alerts on a daily basis. The Microsoft Intelligent Security Graph, the foundation for our security solutions, processes 8.2 trillion signals every day. We ground our solutions in this intelligence and build in protections through automation that’s delivered through our cloud-powered solutions, evolving as the threat landscape does. Only this combination will enable us to take back control and deliver on a Zero Trust network with more intelligent proactive protection.
Here’s a bit more about some of the solutions shared above:
Discovering and controlling cloud apps natively on your endpoints
As the volume of cloud applications continues to grow, security and IT departments need more visibility and control to prevent Shadow IT. At last year’s Ignite, we announced the native integration of Microsoft Cloud App Security and Microsoft Defender ATP, which enables our Cloud Access Security Broker (CASB) to leverage the traffic information collected by the endpoint, regardless of the network from which users are accessing their cloud apps. This seamless integration gives security admins a complete view of cloud application and services usage in their organization.
At this year’s Ignite, we’re extending this capability, now in preview, with native access controls based on Microsoft Defender ATP network protection that allows you to block access to risky and non-compliant cloud apps. We also added the ability to coach users who attempt to access restricted apps and provide guidance on how to use cloud apps securely.
Building stronger protections starting with hardware
As we continue to build in stronger protections at the operating system level, we’ve seen attackers shift their techniques to focus on firmware—a near 5x increase in the last three years. That’s why we worked across our vast silicon and first- and third-party PC manufacturing partner ecosystem to build in stronger protections at the hardware level in what we call Secured-core PCs to protect against these kinds of targeted attacks. Secured-core PCs combine identity, virtualization, operating system, hardware, and firmware protection to add another layer of security underneath the operating system.
Application Guard container protections coming to Office 365
Secured-core PCs deliver on the Zero Trust model, and we want to further build on those concepts of isolation and minimizing trust. That’s why I’m thrilled to share that the same hardware-level containerization we brought to the browser with Application Guard integrated with Microsoft Edge will be available for Office 365.
This year at Ignite, we are providing an early view of Application Guard capabilities integrated with Office 365 ProPlus. You will be able to open an untrusted Word, Excel, or PowerPoint file in a virtualized container. View, print, edit, and save changes to untrusted Office documents—all while benefiting from that same hardware-level security. If the untrusted file is malicious, the attack is contained and the host machine untouched. A new container is created every time you log in, providing a clean start as well as peace of mind.
When you want to consider the document “trusted,” files are automatically checked against the Microsoft Defender ATP threat cloud before they’re released. This integration with Microsoft Defender ATP provides admins with advanced visibility and response capabilities—providing alerts, logs, confirmation the attack was contained, and visibility into similar threats across the enterprise. To learn more or participate, see the Limited Preview Sign Up.
Automation and impact analysis reinvent Threat and Vulnerability Management
More than two billion vulnerabilities are detected every day by Microsoft Defender ATP and the included Threat and Vulnerability Management capabilities, and we’re adding even more capabilities to this solution.
Going into public preview this month, we have several enhancements, including: vulnerability assessment support for Windows Server 2008R2 and above; integration with Service Now to further improve the communication across IT and security teams; role-based access controls; advanced hunting across vulnerability data; and automated user impact analysis to give you the ability to simulate and test how a configuration change will impact users.
Automation in Office 365 ATP blocked 13.5 billion malicious emails this year
In September, we announced the general availability of Automated Incident Response, a new capability in Office 365 ATP that enables security teams to efficiently detect, investigate, and respond to security alerts. We’re building on that announcement, using the breadth of signals from the Intelligent Security Graph to amplify your ability to detect breaches through new enhanced compromise user detection and response capabilities in Office 365 ATP.
Now in public preview, the solution leverages the insights from mail flow patterns and Office 365 activities to detect impacted users and alert security teams. Automated playbooks then investigate those alerts, look for possible sources of compromise, assess impact, and make recommendations for remediation.
Campaign detections coming to Office 365 ATP
Attackers think in terms of campaigns. They continuously morph their email exploits by changing attributes like sending domains and IP addresses, payloads (URLs and attachments), and email templates attempting to evade detection. With campaign views in Office 365 ATP, you’ll be able to see the entire scope of the campaign targeted at your organization. This includes deep insights into how the protection stack held up against the attack—including where portions of the campaign might have gotten through due to tenant overrides thereby exposing users. This view helps you quickly identify configuration flaws, targeted users, and potentially compromised users to take corrective action and identify training opportunities. Security researchers will be able to use the full list of indicators of compromise involved in the campaign to go hunt further. This capability will be in preview by the end of the year.
Protection across platforms: enterprise detection and response (EDR) for Mac
Work doesn’t happen in just one place. We know that people use a variety of devices and apps from various locations throughout the day, taking business data with them along the way. That means more complexity and a larger attack surface to protect. Microsoft’s Intelligent Security Graph detects five billion threats on devices every month. To strengthen enterprise detection and response (EDR) capabilities for endpoints, we’re adding EDR capabilities to Microsoft Defender ATP for Mac, entering public preview this week. Moving forward, we plan to offer Microsoft Defender ATP for Linux servers, providing additional protection for our customers’ heterogeneous networks.
We understand the pressure defenders are under to keep pace with these evolving threats. We are grateful for the trust you’re putting in Microsoft to help ease the burdens on your teams and help focus your priority work.
Related links
Microsoft announces new innovations in security, compliance, and identity at Ignite
Navigate data protection and risk in the cloud era
Microsoft Intelligent Security Association grows to more than 80 members
What’s new in Azure Active Directory at Microsoft Ignite 2019
FastTrack for Office 365 ATP and Microsoft Defender ATP
The post Further enhancing security from Microsoft, not just for Microsoft appeared first on Microsoft Security.
bharatiyamedia-blog · 5 years ago
Wannacry ransomware assault: Trade specialists provide their suggestions for prevention
http://tinyurl.com/y3qwkknt Wannacry stays a big menace for firms. Learn the way your group can guard in opposition to it. WannaCry: One yr later, is the world prepared for an additional main assault? ZDNet’s Danny Palmer study’s the aftermath of WannaCry, Notpetya, and Unhealthy Rabbit. I wrote about the Wannacrypt ransomware attack a few years in the past. Also referred to as Wannacry, this assault concerned a serious Home windows vulnerability, which allowed attackers to entry methods, encrypt knowledge rendering it off-limits and demand a ransom cost to launch mentioned knowledge.  Sadly, Wannacry stays a big menace.  SEE: Windows 10 security: A guide for business leaders (TechRepublic Premium) I spoke with a number of trade safety specialists together with: Andrew Morrison, precept, Deloitte Cyber Risk Services; Dylan Owen, senior supervisor, cyber companies, Raytheon; and Josh Mayfield, director of safety technique, Absolute, to find out the present standing of Wannacry and tricks to guard in opposition to it. Why is Wannacry nonetheless a menace? Scott Matteson: Is Wannacry nonetheless a menace? Andrew Morrison: WannaCry is clearly nonetheless a menace for the big variety of unpatched methods. Unhealthy actors can now simply detect unpatched methods and direct WannaCry at them to conduct focused assaults. This narrative will not be new. Actually, WannaCry used the identical system as NotPetya. The precise toolkit that was used and stolen from the NSA remains to be probably a menace for creating variants of assaults and circumvent assaults. Whereas the patches tackle the toolkit, utilizing it to search out new vulnerabilities remains to be a menace. Customers assume they’re protected as a result of they patched what they noticed, however the menace developed utilizing the identical toolkit, and they are often hit once more. Dylan Owen: To a point it nonetheless is. In keeping with knowledge generated by Shodan, there are greater than 400,000 gadgets within the US which might be nonetheless weak to Wannacry. Manufacturing methods could possibly be significantly in danger, as a lot of these methods run on older variations of Home windows or embedded Home windows. Corporations are cautious to patch these older methods as a result of the method could trigger manufacturing capabilities to halt. SEE: 10 dangerous app vulnerabilities to watch out for (Free PDF) (TechRepublic obtain) How the menace developed Scott Matteson: How has the menace developed? Andrew Morrison: The Wannacry menace has developed to your complete machine. What began as a nation-state assault developed into focused methods. Risk actors are usually not simply appearing in opportunistic methods anymore. Fairly, as demonstrated by WannaCry and NotPetya, they will take toolkits and do reconnaissance. In return, these subsequent assaults shall be tougher to defend in opposition to, making restoration practically inconceivable. Dylan Owen: From malware to crypto-mining code to distributed denial of service (DDoS) assaults, hackers are adept at creating variants to contaminate weak methods. Josh Mayfield: Totally different strains of ransomware proceed to develop, however let’s face it, WannaCry was beta testing. The actual menace comes within the type of ransom that does not even ask for cryptocurrency, however for precise conquest: Give us this useful resource, or we are going to destroy it. 
Ransom-style cybercrime turns into a way more worthwhile alternative when you seize management of the tens of millions of GPUs world wide who can grow to be your personal private gold-bar goose. For this reason we see “ransom” trying increasingly more like enslavement. This slave-raiding malware will solely advance. Which is extra profitable: Robbing a financial institution or having a cash machine from the Division of Treasury? SEE: Internet and email usage policy (TechRepublic Premium) What must be achieved Scott Matteson: What are firms doing about it? Andrew Morrison: At a excessive stage, WannaCry highlighted the necessity for higher vigilance and hygiene. In different phrases, it taught organizations what must be patched and the way shortly this should be achieved. As a way to keep forward, organizations should conduct audits of their patching processes, then look into instruments and insurance policies to make the observe simpler. A great instance of that is the present motion in direction of stronger automation in patching. The second piece is restoration. Organizations are attempting to arrange methods, knowledge, and enterprise processes to resist assaults via air-gapped restoration options so there may be an entry level that’s sanitized and cleaned. From there, the subsequent entry opens and property will be saved. This ensures vulnerabilities and malware are unable to propagate there as a result of the community connection is eliminated. Moreover, this enables a spot for crucial knowledge to reside and be used to deliver methods again. Eradicating crucial property to offline chilly storage is one thing extra organizations are doing, and one thing Deloitte Cyber encourages to ascertain immunity-based protection to restoration. This strategy is far more cost effective than paying ransom to get knowledge again as a result of the group owns it. Dylan Owen: We will count on to see an increase in focused assaults in opposition to methods which might be troublesome to patch, like air-gapped or industrial management methods. Because the assaults grow to be extra subtle, so ought to our protection methods. Corporations should proactively patch their weak methods. Nonetheless, if a system can’t be patched, firms ought to isolate the vulnerability behind a firewall. Since assaults like WannaCry use port 445 to establish vulnerabilities, firms ought to block its visibility from the web. If the port is not routable, then malicious actors can have a tough time understanding who to focus on. Lastly, whereas this might not be potential for all firms, they need to look to improve and change weak Home windows methods with newer, protected variations. Josh Mayfield: Corporations are following the usual narrative: Hiring consultants, implementing a couple of adjustments, shopping for a bunch of safety instruments, and crossing fingers. IT complexity has grow to be so extreme that we simply cannot see although the densely packed tangle to pinpoint weaknesses. And after we do discover weaknesses, we are sometimes conflating “hole” with “no safety product.” So we buy groceries, by no means realizing that adjustments to our current instruments (e.g. making them resilient) would enhance their odds of success from artistic and motivated criminals. SEE: Launching a career in cybersecurity: An insider’s guide (free PDF) (TechRepublic Premium) Greatest practices Scott Matteson: What greatest practices ought to IT departments comply with? Dylan Owen: Be proactive. 
IT departments ought to constantly monitor for vulnerabilities and develop a vulnerability administration program to ascertain a transparent course of for addressing threats. Particularly, the IT staff ought to change outdated Home windows methods and again up crucial methods to make sure that stolen or tampered information will be recovered. Moreover, the staff should take a look at to make sure that data will be recovered within the occasion of an assault. Testing back-up methods is usually a missed step, but it is essential in figuring out an organization’s functionality to rebound after an assault. Josh Mayfield: It’s prudent for IT departments to concentrate on resilience. In keeping with Gartner, world spending on data safety is predicted to exceed $124 billion in 2019, but we’re nonetheless witnessing vital breaches in in the present day’s safety panorama—additional proving that complexity is a transparent and current rival of cybersecurity. Most organizations have threat profiles and commitments with their distributors, particularly these dealing with PHI as a third-party. But, while you multiply the variety of connections, knowledge flows, EDIs, and different exchanges, one thing is certain to be uncared for within the Gordian knot.  With out understanding the place to look, it is inconceivable to establish the finer associations (knowledge schemes), and because of this, relationships involving entry management, and authorization/authentication grow to be anybody’s greatest guess. Visibility is vital. However then what? You will in all probability discover—together with your new unimpeded view—a graveyard of damaged, disabled, and failing brokers and controls.  How does one keep resilient when the expertise can’t stand up to the slightest perturbation on the gadget? By persisting the crucial controls essential to ship a resilient setting. To edge towards resilience, we should be sure that somebody watches the watchers. We should elevate to an Olympian vantage level to survey every management’s effectiveness and its capability to remain alive. Safety is much from a snapshot of right configurations, it’s the maniacal pursuit of resilience, bouncing again from damage and being armed with controls and brokers boasting of their immortality. That is what persistence brings, an unmistakable path to resilience. Cybersecurity Insider E-newsletter Strengthen your group’s IT safety defenses by holding abreast of the newest cybersecurity information, options, and greatest practices. Delivered Tuesdays and Thursdays Join in the present day Join in the present day Additionally see Picture: Getty Pictures/iStockphoto Source link
ocptechnology · 2 years ago
Oracle Dataguard Switchover with broker
To perform an Oracle Data Guard switchover with the broker, you can follow these general steps:
Prepare for the switchover. Before starting the switchover process, make sure that both the primary and standby databases are synchronized and that all required logs have been applied to the standby database. Also, ensure that the broker is configured and running on both the primary and standby…
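The excerpt cuts off here. On recent Oracle versions the switchover itself is then normally driven from DGMGRL along these lines; a sketch only, with database names and credentials as placeholders rather than values from the post:
$ dgmgrl sys@prod
DGMGRL> SHOW CONFIGURATION;
DGMGRL> VALIDATE DATABASE 'prod_stby';
DGMGRL> SWITCHOVER TO 'prod_stby';
DGMGRL> SHOW CONFIGURATION;
VALIDATE DATABASE (available in 12c and later) confirms the standby is ready to take over before the SWITCHOVER command swaps the roles and restarts the databases as needed.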
sandeep2363 · 6 years ago
Data guard broker(DGMGRL) steps during upgrade of Database
Following are the steps used to disable or enable the Data Guard Broker service during the upgrade process of an Oracle database:
1. Disable the fast-start failover.
DGMGRL> DISABLE FAST_START FAILOVER;
2. Shut down the database PROD.
SQL> SHUTDOWN IMMEDIATE;
3. Disable the configuration of the Data Guard broker.
DGMGRL> DISABLE CONFIGURATION;
4. Stop…
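The excerpt is truncated at step 4. For completeness, once the upgrade has finished the broker is typically brought back in the reverse order; a sketch, assuming the same configuration that was disabled above:
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> SHOW CONFIGURATION;
SHOW CONFIGURATION should report SUCCESS once the broker has reconnected to both databases.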
iyarpage · 7 years ago
Validating Topic Configurations in Apache Kafka
Messages in Apache Kafka are appended to (partitions of) a topic. Topics have a partition count, a replication factor and various other configuration values. Why do those matter and what could possibly go wrong?
Why does Kafka topic configuration matter?
There are three main parts that define the configuration of a Kafka topic:
Partition count
Replication factor
Technical configuration
The partition count defines the level of parallelism of the topic. For example, a partition count of 50 means that up to 50 consumer instances in a consumer group can process messages in parallel. The replication factor specifes how many copies of a partition are held in the cluster to enable failover in case of broker failure. And in the technical configuration, one can define the cleanup policy (deletion or log compaction), flushing of data to disk, maximum message size, permitting unclean leader elections and so on. For a complete list, see http://ift.tt/2Be66Ia. Some of these properties are quite easy to change at runtime. For others this is a lot harder, though.
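For example, a purely technical setting such as the cleanup policy can be changed on a live topic with the stock tooling; a sketch, with the topic name and Zookeeper connection string as placeholders:
bin/kafka-configs.sh --zookeeper zk:2181 --entity-type topics --entity-name mytopic --alter --add-config cleanup.policy=compact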
Let’s take the partition count. Increasing it upwards is easy – just run
bin/kafka-topics.sh --alter --zookeeper zk:2181 --topic mytopic --partitions 42
This might be sufficient for you. Or it might open the fiery gates of hell and break your application. The latter is the case if you depend on all messages for a given key landing on the same partition (to be handled by the same consumer in a group) or for example if you run a Kafka Streams application. If that application uses joins, the involved topics need to be copartitioned, meaning that they need to have the same partition count (and producers using the same partitioner, but that is hard to enforce). Even without joins, you don’t want messages with the same key to end up in different KTables.
Changing the replication factor is serious business. It is not a case of simply saying “please increase the replication factor to x” as it is with the partition count. You need to completely reassign partitions to brokers, specifying the preferred leader and n replicas for each partition. It is your task to distribute those well across your cluster. This is no fun for anyone involved. Practical experience with this has actually led to this blog post. The technical configuration has an impact as well. It could be for example quite essential that a topic is using compaction instead of deletion if an application depends on that. You also might find the retention time too small or too big.
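As an illustration of what that reassignment involves, here is a sketch for raising a topic to replication factor 3; the topic name, broker ids and file name are placeholders, not taken from the post:
cat > increase-replication.json <<'EOF'
{"version":1,"partitions":[
  {"topic":"mytopic","partition":0,"replicas":[1,2,3]},
  {"topic":"mytopic","partition":1,"replicas":[2,3,1]}
]}
EOF
bin/kafka-reassign-partitions.sh --zookeeper zk:2181 --reassignment-json-file increase-replication.json --execute
Every partition of the topic needs an explicit replica list like this, with the preferred leader first, which is exactly why doing it by hand across a large cluster is no fun.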
The Evils of Automatic Topic Creation
In a recent project, a central team managed the Kafka cluster. This team kept a lot of default values in the broker configuration. This is mostly sensible as Kafka comes with pretty good defaults. However, one thing they kept was auto.create.topics.enable=true. This property means that whenever a client tries to write to or read from a non-existing topic, Kafka will automatically create it. Defaults for partition count and replication factor were kept at 1.
This led to the situation where the team forgot to set up a new topic manually before running producers and consumers. Kafka created that topic with default configuration. Once this was noticed, all applications were stopped and the topic deleted – only to be created again automatically seconds later, presumably because the team didn’t find all clients. “Ok”, they thought, “let’s fix it manually”. They increased the partition count to 32, only to realize that they had to provide the complete partition assignment map to fix the replication factor. Even with tool support from Kafka Manager, this didn’t give the team members a great feeling. Luckily, this was only a development cluster, so nothing really bad happened. But it was easy to conceive that this could also happen in production as there are no safeguards.
Another danger of automatic topic creation is the sensitivity to typos. Let’s face it – sometimes we all suffer from butterfingers. Even if you took all necessary care to correctly create a topic called “parameters”, you might end up with something like “paramaters” because of a single slip in one client.
Automatic topic creation means that your producer thinks everything is fine, and you’ll scratch your head as to why your consumers don’t receive any data.
Another conceivable issue is that a developer who maybe is not yet that familiar with the Producer API might confuse the String parameters in the send method:
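The original code sample did not survive the copy here; a minimal Java sketch of the mix-up being described (the producer, topic name and value are illustrative and assumed to be in scope):
// intended: topic "parameters", a random UUID as the message key
producer.send(new ProducerRecord<>("parameters", UUID.randomUUID().toString(), value));
// accidental: the first two arguments are swapped, so the random UUID becomes the topic name
producer.send(new ProducerRecord<>(UUID.randomUUID().toString(), "parameters", value));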
So while our developer meant to assign a random value to the message key, he accidentally set a random topic name. Every time a message is produced, Kafka creates a new topic.
So why don’t we just switch automatic topic creation off? Well, if you can: do it. Do it now! Sadly, the team didn’t have that option. But an idea was born – what would be the easiest way to at least fail fast at application startup when something is different than expected?
How to automatically check your topic configuration
In older versions of Kafka, we basically used the code called by the kafka-topics.sh script to programmatically work with topics. To create a topic for example we looked at how to use kafka.admin.CreateTopicCommand. This was definitely better than writing straight to Zookeeper because there is no need to replicate the logic of “which ZNode goes where”, but it always felt like a hack. And of course we got a dependency on the Kafka broker in our code – definitely not great.
Kafka 0.11 implemented KIP-117, thus providing a new type of Kafka client – org.apache.kafka.clients.admin.AdminClient. This client enables users to programmatically execute admin tasks without relying on those old internal classes or even Zookeeper – all Zookeeper tasks are executed by brokers.
With AdminClient, it’s fairly easy to query the cluster for the current configuration of a topic. For example, this is the code to find out if a topic exists and what its partition count and replication factor is:
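The actual listing is missing from this copy of the post; the following is a sketch of what it plausibly looks like with the plain AdminClient API of Kafka 0.11+ (broker address and topic name are placeholders):
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeTopicsResult;
import org.apache.kafka.clients.admin.TopicDescription;

static void describeTopic(String bootstrapServers, String topic) throws InterruptedException {
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    try (AdminClient adminClient = AdminClient.create(props)) {
        // describeTopics returns one KafkaFuture per topic name; get() blocks until the broker answers
        DescribeTopicsResult result = adminClient.describeTopics(Collections.singletonList(topic));
        TopicDescription description = result.values().get(topic).get();
        int partitionCount = description.partitions().size();
        int replicationFactor = description.partitions().get(0).replicas().size();
        System.out.println(topic + ": " + partitionCount + " partitions, replication factor " + replicationFactor);
    } catch (ExecutionException e) {
        // an UnknownTopicOrPartitionException as the cause means the topic does not exist
        System.out.println(topic + " does not exist: " + e.getCause());
    }
}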
The DescribeTopicsResult contains all the info required to find out if the topic exists and how partition count and replication factor are set. It’s asynchronous, so be prepared to work with Futures to get your info.
Getting configs like cleanup.policy works similarly, but uses a different method:
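Again the listing itself is missing; a sketch under the same assumptions, reusing the adminClient from above and querying the topic’s configuration entries:
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.DescribeConfigsResult;
import org.apache.kafka.common.config.ConfigResource;

ConfigResource topicResource = new ConfigResource(ConfigResource.Type.TOPIC, "test_topic");
DescribeConfigsResult configsResult = adminClient.describeConfigs(Collections.singletonList(topicResource));
Config config = configsResult.values().get(topicResource).get();  // again one KafkaFuture per resource
String cleanupPolicy = config.get("cleanup.policy").value();
String retentionMs = config.get("retention.ms").value();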
Under the hood there is the same Future-based mechanism.
A first implementation attempt
If you are in a situation where your application depends on a certain configuration for the Kafka topics you use, it might make sense to fail early when something is not right. You get instant feedback and have a chance to fix the problem. Or you might at least want to emit a warning in your log. In any case, as nice as the AdminClient is, this check is not something you should have to implement yourself in every project.
Thus, the idea for a small library was born. And since naming things is hard, it’s called “Club Topicana”.
With Club Topicana, you can check your topic configuration every time you create a Kafka Producer, Consumer or Streams client.
Expectations can be expressed programmatically or configuratively. Programmatically, it uses a builder:
This basically says “I expect the topic test_topic to exist. It should also have 32 partitions and a replication factor of 3. I also expect the cleanup policy to be delete. Kafka should retain messages for at least 30 seconds.”
Another option to specify an expected configuration is YAML (parser is included):
What do you do with those expectations? The library provides factories for all Kafka clients that mirror their public constructors and additionally expects a collection of expected topic configurations. For example, creating a producer can look like this:
The last line throws a MismatchedTopicConfigException if the actual configuration does not meet expectations. The message of that exception lists the differences. It also provides access to the computed result so users can react to it in any way they want.
The code for consumers and streams clients looks similar. Examples are available on GitHub. If all standard clients are created using Club Topicana, an exception will prevent creation of a client and thus auto creation of a topic. Even if auto creation is disabled, it might be valuable to ensure that topics have the correct configuration.
There is also a Spring client. The @EnableClubTopicana annotation triggers Club Topicana to read YAML configuration and execute the checks. You can configure if you want to just log any mismatches or if you want to let the creation of the application context fail.
This is all on GitHub and available on Maven Central.
Caveats
Club Topicana will not notice when someone changes the configuration of a topic after your application has successfully started. It also of course cannot guard against other clients doing whatever on Kafka.
Summary
The configuration of your Kafka topics is an essential part of running your Kafka applications. Wrong partition count? You might not get the parallelism you need or your streams application might not even start. Wrong replication factor? Data loss is a real possibility. Wrong cleanup policy? You might lose messages that you depend on later. Sometimes, your topics might be auto-generated and come with bad defaults that you have to fix manually. With the AdminClient introduced in Kafka 0.11, it’s simple to write a library that compares actual and desired topic configurations at application startup.
The post Validating Topic Configurations in Apache Kafka appeared first on codecentric AG Blog.
Validating Topic Configurations in Apache Kafka published first on http://ift.tt/2fA8nUr
0 notes
mobilenamic · 7 years ago
Text
Validating Topic Configurations in Apache Kafka
Messages in Apache Kafka are appended to (partitions of) a topic. Topics have a partition count, a replication factor and various other configuration values. Why do those matter and what could possibly go wrong?
Why does Kafka topic configuration matter?
There are three main parts that define the configuration of a Kafka topic:
Partition count
Replication factor
Technical configuration
The partition count defines the level of parallelism of the topic. For example, a partition count of 50 means that up to 50 consumer instances in a consumer group can process messages in parallel. The replication factor specifes how many copies of a partition are held in the cluster to enable failover in case of broker failure. And in the technical configuration, one can define the cleanup policy (deletion or log compaction), flushing of data to disk, maximum message size, permitting unclean leader elections and so on. For a complete list, see http://ift.tt/2Be66Ia. Some of these properties are quite easy to change at runtime. For others this is a lot harder, though.
Let’s take the partition count. Increasing it upwards is easy – just run
bin/kafka-topics.sh --alter --zookeeper zk:2181 --topic mytopic --partitions 42
This might be sufficient for you. Or it might open the fiery gates of hell and break your application. The latter is the case if you depend on all messages for a given key landing on the same partition (to be handled by the same consumer in a group) or for example if you run a Kafka Streams application. If that application uses joins, the involved topics need to be copartitioned, meaning that they need to have the same partition count (and producers using the same partitioner, but that is hard to enforce). Even without joins, you don’t want messages with the same key end up in different KTables.
Changing the replication factor is serious business. It is not a case of simply saying “please increase the replication factor to x” as it is with the partition count. You need to completely reassign partitions to brokers, specifying the preferred leader and n replicas for each partition. It is your task to distribute those well across your cluster. This is no fun for anyone involved. Practical experience with this has actually led to this blog post. The technical configuration has an impact as well. It could be for example quite essential that a topic is using compaction instead of deletion if an application depends on that. You also might find the retention time to small or too big.
The Evils of Automatic Topic Creation
In a recent project, a central team managed the Kafka cluster. This team kept a lot of default values in the broker configuration. This is mostly sensible as Kafka comes with pretty good defaults. However, one thing they kept was auto.create.topics.enable=true. This property means that whenever a client tries to write to or read from a non-existing topic, Kafka will automatically create it. Defaults for partition count and replication factor were kept at 1.
This led to the situation where the team forgot to set up a new topic manually before running producers and consumers. Kafka created that topic with default configuration. Once this was noticed, all applications were stopped and the topic deleted – only to be created again automatically seconds later, presumably because the team didn’t find all clients. “Ok”, they thought, “let’s fix it manually”. They increased the partition count to 32, only to realize that they had to provide the complete partition assignment map to fix the replication factor. Even with tool support from Kafka Manager, this didn’t give the team members a great feeling. Luckily, this was only a development cluster, so nothing really bad happened. But it was easy to conceive that this could also happen in production as there are no safeguards.
Another danger of automatic topic creation is the sensitivity to typos. Let’s face it – sometimes we all suffer from butterfingers. Even if you took all necessary care to correctly create a topic called “parameters”, you might end up with something like
Automatic topic creating means that your producer thinks everything is fine, and you’ll scratch your head as to why your consumers don’t receive any data.
Another conceivable issue is that a developer that maybe is not yet that familiar with the Producer API might confuse the String parameters in the send method
So while our developer meant to assign a random value to the message key, he accidentally set a random topic name. Every time a message is produced, Kafka creates a new topic.
So why don’t we just switch automatic topic creation off? Well, if you can: do it. Do it now! Sadly, the team didn’t have that option. But an idea was born – what would be the easiest way to at least fail fast at application startup when something is different than expect?
How to automatically check your topic configuration
In older versions of Kafka, we basically used the code called by the kafka-topics.sh script to programmatically work with topics. To create a topic for example we looked at how to use kafka.admin.CreateTopicCommand. This was definitely better than writing straight to Zookeeper because there is no need to replicate the logic of “which ZNode goes where”, but it always felt like a hack. And of course we got a dependency on the Kafka broker in our code – definitely not great.
Kafka 0.11 implemented KIP-117, thus providing a new type of Kafka client – org.apache.kafka.clients.admin.AdminClient. This client enables users to programmatically execute admin tasks without relying on those old internal classes or even Zookeeper – all Zookeeper tasks are executed by brokers.
With AdminClient, it’s fairly easy to query the cluster for the current configuration of a topic. For example, this is the code to find out if a topic exists and what its partition count and replication factor is:
The DescribeTopicsResult contains all the info required to find out if the topic exists and how partition count and replication factor are set. It’s asynchronous, so be prepared to work with Futures to get your info.
Getting configs like cleanup.policy works similarly, but uses a different method:
Under the hood, there is the same Future-based mechanism.
A first implementation attempt
If you are in a situation where your application depends on a certain configuration for the Kafka topics you use, it might make sense to fail early when something is not right. You get instant feedback and have a chance to fix the problem. Or you might at least want to emit a warning in your log. In any case, as nice as the AdminClient is, this check is not something you should have to implement yourself in every project.
Thus, the idea for a small library was born. And since naming things is hard, it’s called “Club Topicana”.
With Club Topicana, you can check your topic configuration every time you create a Kafka Producer, Consumer or Streams client.
Expectations can be expressed programmatically or via configuration. Programmatically, it uses a builder:
This basically says “I expect the topic test_topic to exist. It should also have 32 partitions and a replication factor of 3. I also expect the cleanup policy to be delete. Kafka should retain messages for at least 30 seconds.”
Another option to specify an expected configuration is YAML (parser is included):
What do you do with those expectations? The library provides factories for all Kafka clients that mirror their public constructors and additionally expect a collection of expected topic configurations. For example, creating a producer can look like this:
The last line throws a MismatchedTopicConfigException if the actual configuration does not meet expectations. The message of that exception lists the differences. It also provides access to the computed result so users can react to it in any way they want.
The code for consumers and streams clients looks similar. Examples are available on GitHub. If all standard clients are created using Club Topicana, an exception will prevent creation of a client and thus auto creation of a topic. Even if auto creation is disabled, it might be valuable to ensure that topics have the correct configuration.
There is also a Spring client. The @EnableClubTopicana annotation triggers Club Topicana to read YAML configuration and execute the checks. You can configure if you want to just log any mismatches or if you want to let the creation of the application context fail.
This is all on GitHub and available on Maven Central.
Caveats
Club Topicana will not notice when someone changes the configuration of a topic after your application has successfully started. It also, of course, cannot guard against other clients doing whatever they like on Kafka.
Summary
The configuration of your Kafka topics is an essential part of running your Kafka applications. Wrong partition count? You might not get the parallelism you need or your streams application might not even start. Wrong replication factor? Data loss is a real possibility. Wrong cleanup policy? You might lose messages that you depend on later. Sometimes, your topics might be auto-generated and come with bad defaults that you have to fix manually. With the AdminClient introduced in Kafka 0.11, it’s simple to write a library that compares actual and desired topic configurations at application startup.
The post Validating Topic Configurations in Apache Kafka appeared first on codecentric AG Blog.
What you need to know about phone encryption
Phone encryption can simply be defined as the process of shielding or protecting an electronic device such as a phone, laptop, or host from being accessed by unauthorized individuals. In the case of your phone, it helps guard your pictures, texts, e-mail and other documents. Encryption stores data in a format that is not readable by computers or by anyone without the authorized means to unlock it. Fingerprints and PIN codes are some of the keys used to lock and unlock encrypted devices. The science of encryption, however, goes beyond the PIN code; it involves several pieces of information: one known by the owner of the device, another hidden in the processor chip within the device without anyone's knowledge.
It should, however, be noted that irrespective of the type of device you are using, data created by third-party apps is saved on their own servers, which may not be encrypted. The rules for decrypting data saved on a server are usually not the same as those for data kept on a phone. Furthermore, most of what is done on a phone is backed up to a server periodically. This means there is a copy of your social-network activity saved on your device and also on the servers of each service. The need to encrypt phone networks cannot, therefore, be overemphasized.
Data stored inside an application on your phone that bypasses any form of connection to a server is encrypted and inaccessible to the FBI, police, or any other law-enforcement agency. For example, if an Apple iOS user wanted to keep contacts off the servers, all he or she needs to do is disable iCloud sync for the respective app in Settings. If you choose not to synchronize your information and use your device's storage instead, the data can be said to be encrypted on the phone, and hence it would not be accessible to law-enforcement agents.
Furthermore, the risks a mobile-phone user is exposed to are quite numerous. These threats can affect the functionality of the phone and may also lead to the user's private information being stolen, so the security of the user has to be guaranteed. Additionally, some applications can carry viruses, so their permissions should be restricted. There are three main targets for attackers:
Information: phones serve as banks for storing large volumes of data that may be sensitive in nature.
Identification: phones can very easily be linked to a particular individual because of their customizable nature.
Availability: an attack might restrict a user's access to their own device.
Hence the need for encrypted phone networks for the purpose of protection. Phone encryption can simply be defined as a way of concealing data in such a way that only those who have the required access can read it. For more details please visit encrypt phone.
cryptokingrobiul · 8 years ago
Top 20 Broker
New Post has been published on http://www.top20broker.com/news/positive-market-sentiment-improved-expectation-non-farm-payrolls/
Positive market sentiment on improved expectation of Non-farm Payrolls
Trading sentiment is positive today after strong employment numbers from ADP yesterday afternoon suggested that we could be in for a strong Non-farm Payrolls report. If that were the case, it would help to build expectations of the Fed tightening once more. US equity markets soared into new high ground yesterday and this is helping to drive global risk appetite today. However, strong jobs are one thing; earnings growth seems to be harder to come by and will be the most closely monitored component of the Employment Situation report. Earnings growth is struggling to find upside traction and could again stay around 2.5%. This would likely limit the longevity of any dollar gains from a strong headline jobs figure. It does not appear yet that Donald Trump’s decision to pull out of the Paris Climate Agreement is impacting global markets, but the condemnation has been widespread as the US opts to go it alone. However, it is notable that oil prices are well over a percent lower today despite the positive market sentiment. There are fears that Trump’s move could increase US focus on crude oil drilling.
Wall Street closed strongly around the highs of the day yesterday as the S&P 500 was up +0.8% at 2430. Asian markets were also positive overnight with the Nikkei +1.6% helped by yen weakness. European markets are also reacting strongly to the gains in the US, although the strength early in the session may become limited in front of the payrolls report. In forex there seems to be a broad risk positive bias although the moves seem to be rather stunted so far. The yen is the major underperformer, whilst sterling is again struggling. Gold is $5 lower on the improved sentiment whilst oil has dropped back by 1.5%.
Non-farm Payrolls will dominate the minds of traders today, but the first real data point of the day is UK Construction PMI at 0930BST, which is expected to slip back marginally to 52.7 (from 53.1). However the Employment Situation report at 1330BST is key as the June FOMC meeting fast approaches. Headline Non-farm Payrolls are expected to drop slightly to 185,000 (from last month’s 211,000), but in the wake of the very strong ADP number yesterday (which was 253,000) this increases the potential for a bullish surprise. Unemployment is expected to stick at 4.4%, but keep an eye on the U6 underemployment rate, which fell to 8.6% last month and is closing in on the bull market lows of 2006/2007 between 8.0%/8.5%. Average Hourly Earnings are expected to be +0.2% for the month, which would be around the +2.5% year on year earnings growth seen for April. Aside from the payrolls report there is also the US Trade Balance at 1330BST, which is expected to deteriorate to -$46.1bn (from -$43.7bn last month).
  Chart of the Day – AUD/USD
In highlighting a strong improvement in the Kiwi a few days ago, it is notable that the Aussie has been a significant underperformer. There is a big top pattern on AUD/USD which completed below $0.7500 in early May that implies a 250 pip downside target towards $0.7250 within the next few months. The recent rally simply unwound the market back to the neckline resistance at $0.7500 and the pullback has been sold into. This comes as the Aussie has started to find significant selling pressure in the past couple of sessions with two consecutive strong bear candles. Yesterday’s candle took the pair to a three week low and broke through two support levels at $0.7415 and then $0.7385. This comes with a concerning deterioration in the momentum indicators with the RSI sharply lower (but also with further downside potential), the Stochastics accelerating lower and now the MACD lines now threatening to cross lower below neutral. The break below $0.7385 has now re-opened the May low at $0.7325. Rallies are now a chance to sell and today’s early rebound looks to be just that. The two support levels breached yesterday now become an area of overhead supply between $0.7385/$0.7415. It was also interesting to see on the hourly chart that an intraday rally failed at $0.7420 and adds to resistance.
EUR/USD
The dollar bulls clawed back a degree of control in the wake of some strong US employment data and this helped to prevent the euro from breaking out above $1.1267 resistance. There are now two key levels to watch on EUR/USD ahead of Non-farm Payrolls today. The resistance at $1.1267 and the long term pivot at $1.1100. On a closing basis, a break higher would open $1.1300 and a continuation towards the medium/longer term target at $1.1350; whereas a corrective fall below $1.1100 would now complete a small top pattern. Technical momentum indicators remain positively configured and imply that corrections remain a chance to sell and there is little suggestion that a dollar rally would gain too much traction in pulling EUR/USD lower. Subsequently any Non-farm Payrolls related dip would likely be seen as a chance to buy once the volatility subsides. The hourly chart shows a near term pivot at $1.1200 above support at $1.1160.
GBP/USD
The market is settling down after a choppy few days driven by varying polls regarding the UK election. However this settling is simply in front of Non-farm Payrolls today which are likely to ramp up the volatility again. The support of the long term neckline around $1.2775 remains key but the momentum indicators are certainly suggesting that there is a concern that this support is under increasing pressure. Where the rising 21 day moving average had been supportive throughout April and May, this is now a basis of resistance at $1.2920 and has capped the last two session highs. Yesterday’s small bodied candle and 85 pip range (average true range is currently 98 pips) suggests a cautious market. The hourly chart shows a market in consolidation mode, however with the Non-farm Payrolls today and more UK polling in the offing in front of next week’s election there could be some significant volatility ahead. A close above $1.2920 re-opens $1.3000 again.
USD/JPY
The dollar bulls made a return yesterday in a move that has significantly helped to improve the outlook once more. A strong bull candle has been followed by early gains today and once more the bulls are testing the medium term pivot around 111.60. This is a choppy period of trading for the pair with a general dollar negative bias, reflected in the drifting lower configuration of the momentum indicators. However the bulls look to now be fighting back again. The resistance begins to increase around 111.60 but if the bulls can breakout (and preferably close above) another previous pivot of 112.20 then the outlook will significantly improve. The hourly chart shows positive configuration on momentum now ahead of the Non-farm Payrolls with initial pivot support around 111.20. A closing break above the mid-May high of 112.10 would complete a small base pattern too.
Gold
The breakout above the pivot of $1261 seems to be creaking now. Three of the four sessions that have come since the market achieved its first closing breakout have all been negative. Despite closing repeatedly above $1261 the market has never managed to break the shackles of this resistance and the pressure is increasing now as today’s early decline shows. The momentum is questionable with the Stochastics crossing lower and the RSI having continuously been unable to push above 60, however as yet there is no suggestion of a significant amount of selling pressure. It is just that the bulls may continue to struggle. The run of higher low supports is intact with $1252.50 and $1247.25. However a close below $1261 would question the bull control today. The resistance is at $1273.75 now. Expect volatility this afternoon with Non-farm Payrolls.
WTI Oil
Despite the bigger than expected inventory drawdown, the oil price ended lower on the day and the technicals remain corrective. This comes with the market today again breaching the neckline support at $48.00 that has guarded against the completion of a head and shoulders top. A closing breach of $48.00 would imply a further $4 of downside. The momentum indicators remain corrective with the MACD lines crossing lower and the Stochastics in consistent decline. Rallies remain a chance to sell with the resistance band once more the old pivot area $49.60/$50.20 and the right hand shoulder high at $50.28. The hourly chart shows near term resistance around $48.20 with a minor pivot around $49.05. The hourly momentum indicators are now negatively configured with unwinding moves on the hourly RSI failing around 55/60 now. A breach of $47.75 support is also an indication of burgeoning bear control. Next support is $47.00 and then $45.55.
Dow Jones Industrial Average
With the positive medium term outlook on the Dow, corrections are still a chance to buy. The recent drift lower was little other than an unwinding move and the bulls have come back strongly to burst through the resistance at 21,112 to achieve an all-time high of 21,144 (above the previous high from March at 21,116). Now comes the job of pushing through the intraday all time high of 21,169. The positive intent from the strong bull candle closing at the high of the day will give the bulls confidence moving into Non-farm Payrolls. The breakout means there is a series of support from the lows 21,033/21,070 and corrections are a chance to buy, with support now building around 21,000. The momentum indicators remain positively configured with the RSI back into the low 60s, MACD lines strong and Stochastics holding up well. The hourly chart shows strong momentum indicators with the MACD lines turning up around neutral and the bulls are well positioned ahead of payrolls.
source-hantecfx
marlik-job-blog · 8 years ago
New Post has been published on http://job.marlik.ir/news/%d8%a7%d8%b3%d8%aa%d8%ae%d8%af%d8%a7%d9%85-%db%b1%db%b1-%d8%b1%d8%af%db%8c%d9%81-%d8%b4%d8%ba%d9%84%db%8c-%d8%af%d8%b1-%d8%b4%d8%b1%da%a9%d8%aa-%d9%be%d8%b1%d8%af%d8%a7%d8%b2%d8%b4%da%af%d8%b1%d8%a7/
Hiring for 11 Job Positions at Pardazeshgaran Saman
In order to complete its staff, Pardazeshgaran Saman invites qualified candidates in Tehran to apply for the following positions.
Each numbered row below gives the job title followed by the required specialized and technical skills.
1. Java Developer
-At least 3 years of experience in solution, design and development of web-based applications using Java/J2EE, Java Servlets, JSP, EJB3, Primefaces/JSF, XML, and Struts
-Very strong understanding of the OOAD concepts and able to use it in the low level design
-Experience in developing SOAP (JAX-WS) and RESTful (JAX-RS) Web Services using Spring Web Services, Apache CXF, Jersey and Axis
-Important Spring modules like Core, Context, DAO, ORM, AOP, Web MVC, Security …
-Strong Back End experience to develop Data Layer using at least one of the ORM frameworks like Hibernate, JPA etc
-Experience in web based application Servers, WebLogic, Websphere and Apache Tomcat
-Continuous integration (Jenkins, Maven, Gradle, Ant, Bamboo/Hudson, SVN, Git)
-Strive to constantly improve the application development processes and tools
-Expert in tuning all tiers of applications on JEE platform and experience in implementing J2EE Design Patterns for module designs
-Experience in using JMS Queues and Topics
-Experience with IDEs like Eclipse, JDeveloper, IntelliJ, Spring Suite etc
-Bachelor’s degree in Computer Science or Engineering or equivalent working experience
-Application of software development principles, theories, concepts, and techniques
-Provides resolutions to an assortment of technical problems
-Innovative in providing solutions, likes to take on challenges with calculated risks
-Understand Business Requirements; participates in System Requirement Analysis; design Applications based on System Requirements and Architecture, prototype if necessary, develop, unit test and deploy application. Will use MS Visio, UML, RUP, and Agile Methodology
-Hands-on experience with build and deployment tools and languages
-Good knowledge of database concepts with working knowledge on SQL, SQL Server, Oracle, DB2 or Sybase database, and Stored Procedures
-Improve and expand Test Driven Development (TDD & ATDD / BDD) Approach to development
-Strong understanding of Service Oriented Architecture (SOA) and service design concepts
2. .NET Developer
-C# .Net
-OOP
-MVC
-Angular
-Wcf & WebServices
-WF
-Entity Framework & LINQ
-Bachelor’s degree in Computer Science or Engineering or equivalent working experience
-Application of software development principles, theories, concepts, and techniques
-Provides resolutions to an assortment of technical problems
-Innovative in providing solutions, likes to take on challenges with calculated risks
-Understand Business Requirements; participates in System Requirement Analysis; design Applications based on System Requirements and Architecture, prototype if necessary, develop, unit test and deploy application. Will use MS Visio, UML, RUP, and Agile Methodology
-Hands-on experience with build and deployment tools and languages
-Good knowledge of database concepts with working knowledge on SQL, SQL Server, Oracle, DB2 or Sybase database, and Stored Procedures
-Improve and expand Test Driven Development (TDD & ATDD / BDD) Approach to development
-Strong understanding of Service Oriented Architecture (SOA) and service design concepts
3. Platform Developer
-C# .Net
-OOP
  -Bachelor’s degree in Computer Science or Engineering or equivalent working experience
-Application of software development principles, theories, concepts, and techniques
-Provides resolutions to an assortment of technical problems
-Good knowledge of database concepts with working knowledge on SQL, SQL Server, Oracle, DB2 or Sybase database, and Stored Procedures
-Strong understanding of Service Oriented Architecture (SOA) and service design concepts
4. Project Management Specialist
-Proficiency in Microsoft Project or Primavera
-Familiarity with Program Management and Portfolio Management
-Familiarity with SharePoint and Project Server
-Familiarity with banking software
-Experience managing software projects
-Familiarity with software development methodologies such as RUP and Agile
-Bachelor's or Master's degree in Industrial Engineering, Software Engineering, or IT
-At least 2 years of experience in software project control
-At least 3 years of programming experience
-At least 2 years of experience analyzing software projects
-Experience working in the banking domain
-Ability to solve challenging problems
5. Analyst
-Familiar with the RUP methodology, UML, and the EA (Enterprise Architect) tool
Specialist in one of the following areas:
1- Foreign exchange operations (drafts, remittances, letters of credit, guarantees, accounting, etc.)
2- ATM and switch (ISO8583, cards, ATM cash handling, accounting, etc.)
3- Core Banking (loan facilities, deposits, channels, modern banking, accounting)
-At least 3 years of experience analyzing software projects
-Experience working in the banking domain
6. Data Warehouse Specialist
-Proficiency in data warehouse design concepts and OLAP (cubes and ETL)
-Proficiency in SQL, PL/SQL, DAX, MDX
-Expert in Microsoft tools (SSIS and SSAS) or Oracle tools (ODI)
-At least 3 years of data warehouse experience, preferably in banking
7. BI Specialist
-Proficiency in OLAP concepts (cubes and ETL)
-Proficiency in SQL, PL/SQL, DAX, MDX
-Expert in dashboard tools such as Oracle Dashboards
-At least 3 years of BI experience, preferably in banking
8. SQL Server Specialist
-Ability to analyze database objects to identify performance improvement areas and tune database for optimal performance
-Hands on experience with maintaining DB backups, clustering, mirroring, replication and failover
-Well versed with Industry best practices for disaster recovery processes
-Experience with bulk data migrations and ability to develop migration strategies
-Process oriented, problem solving and quality focused attitude
-Troubleshooting skill set in SQL Server including, but not limited to: performance (blocking, query optimization), query failures (deadlocks, exceptions), SQL Job Agent Failures, mirroring, replication failures, and SQL log analysis
-Master’s Degree preferred (minimum Bachelor’s degree) – Preferably in Computer Engineering, Computer Science or related field
-۳+ year of experience in Database design and modelling using Microsoft SQL Server
-Excellent verbal and written communication skills, with proven technical writing abilities
-Team-oriented thinking with demonstrated ability to produce high-quality work as part of a fast-paced, dynamic team
-MCTS certification
9. Oracle Specialist
Experience with:
-RMAN (Recover Manager)
-RAC (Real Application Clusters)
-Partitioning
-Shell scripting
-PL/SQL
-Understanding of SQL execution plans
-Understanding of oracle awr reports
-Familiar with Oracle materialized views (snapshots in prior releases)
-Expert level knowledge and skills in various Oracle database recovery scenarios in a disaster recovery environment
-Expert knowledge and skills in master-to-master and master-to-slave replication using Oracle snapshot replication functionality
Experience with these tasks:
-Load balancing
-Data migrations
-Upgrades
-Patching
-Troubleshooting
-Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent experience
-۲+ years of directly-related, hands-on work experience in a professional Oracle DBA position and Expertise in designing, building, installing, configuring, and tuning Oracle database, database upgrades and database patching
-۲+ years of relevant work experience as an Oracle Database Administrator with an in-depth knowledge of Oracle’s architecture supporting different versions of Oracle databases from 11 to 12c on Oracle LINUX.
-۲+ years of relevant work experience using the RMAN utility for database backup and recovery, database cloning, staging database refreshes and disaster recovery (DR) creations
-۲+ years of relevant work experience with Oracle database performance tuning and capacity planning
-۲+ years of relevant work experience in proactively identifying poorly executing SQL statements and PL/SQL blocks, and providing competent solutions to ensure optimal application performance
-۱+ years of relevant work experience in Oracle Data Guard/Data Broker configuration and maintenance of physical databases with fast-failover capabilities
-A deep understanding of how an Oracle database utilizes memory, CPU, and swap space in UNIX and Linux environments
-Oracle Linux operating systems experience
-Available for 24/7 support
10. Linux Specialist
-Ability to design/write scripts for automation
Ability to:
-Comfortable working with open-source software
-Excellent verbal and written communications skill
-Be a self-starter
-Act independently to solve problems
-Be creative and take advantage of new ideas
-Know when to escalate
-Write technical documentation
-Previous experience with enterprise container and automation tools – Docker, Puppet, Chef, HP Server Automation, HP Operations Orchestration
-Experience with LDAP – Lightweight Directory Access Protocol, DNS – Domain Name System, DHCP – Dynamic Host Configuration Protocol, and IIS – Internet Information Services
-Extensive knowledge of Unix/Linux operating systems, Unix shells and standard utilities, and common Unix/Linux security tools.
-System administration experience and knowledge of VMware and administration of virtual servers.
-Operating Systems: RedHat base LINUX
-Scripting languages: Korn, KSH, CSH, Shell Scripting on Red Hat Linux
-Grub, PXE boot, Kickstart
-SVM, LVM, Boot from SAN, UFS/ZFS, file system configuration
-General working knowledge of NAS, SAN, and networking.
-Experience with Configuration and Maintenance of Automation tools like Puppet, Blade logic, Ansible and Chef.
-LPIC-2 or LPIC-1 certification
-Bachelor’s degree, Masters preferred in computer science or related field.
-Two or more years’ experience in VMware for servers and systems installation, operations, administration, and maintenance of virtualized servers
-Available for 24/7 support
11. Security Specialist
-Familiarity with the regulations of the operations control center (SOC)
-Familiarity with penetration testing standards such as OWASP
-Familiarity with security appliances such as WAF/DAF
-Familiar with current monitoring technologies
-Ability to install and set up monitoring and management services
-Familiar with information security concepts
-Familiar with network management protocols
-Familiar with security appliances such as firewalls, UTM, IDS/IPS
-Familiar with security assessment methodologies
-Familiar with the security assessment tools of this domain
-Familiar with various operating system, database, and web service technologies
-Familiar with security labs
-Familiar with frameworks for ranking and scoring security vulnerabilities
-Familiarity with security requirements and security-related training frameworks
-Familiar with forensics frameworks
-Familiar with security incidents
-Familiar with security tools for forensics and incident response
-Familiar with forensics processes
-Familiar with incident response processes
-Ability to solve challenging problems
-Familiar with threat analysis and providing practical solutions
-Ability to prepare technical and management reports
General skills (all positions)
Familiarity with English (good listening and reading comprehension of technical texts; intermediate speaking and writing)
Proficiency in at least one software development methodology such as RUP
Presentation skills
Documentation skills
Teamwork
Systems thinking
Effective communication
Applicants can send their résumés to the email address below.
Email address: [email protected]
Contact information
ocptechnology · 3 years ago
Oracle 11g R2 Dataguard configuration step by step
Oracle 11g R2 Dataguard configuration step by step. Before starting the DG configuration you must read the note below: the Oracle Database RDBMS software is installed together with one database on the PRIMARY server, while on the STANDBY server only the RDBMS software is installed, without any database. PRIMARY: IP Address 192.168.1.10. STANDBY: IP Address …
iyarpage · 7 years ago
Validating Topic Configurations in Apache Kafka
Messages in Apache Kafka are appended to (partitions of) a topic. Topics have a partition count, a replication factor and various other configuration values. Why do those matter and what could possibly go wrong?
Why does Kafka topic configuration matter?
There are three main parts that define the configuration of a Kafka topic:
Partition count
Replication factor
Technical configuration
The partition count defines the level of parallelism of the topic. For example, a partition count of 50 means that up to 50 consumer instances in a consumer group can process messages in parallel. The replication factor specifes how many copies of a partition are held in the cluster to enable failover in case of broker failure. And in the technical configuration, one can define the cleanup policy (deletion or log compaction), flushing of data to disk, maximum message size, permitting unclean leader elections and so on. For a complete list, see http://ift.tt/2Be66Ia. Some of these properties are quite easy to change at runtime. For others this is a lot harder, though.
Let’s take the partition count. Increasing it is easy – just run
bin/kafka-topics.sh --alter --zookeeper zk:2181 --topic mytopic --partitions 42
This might be sufficient for you. Or it might open the fiery gates of hell and break your application. The latter is the case if you depend on all messages for a given key landing on the same partition (to be handled by the same consumer in a group) or for example if you run a Kafka Streams application. If that application uses joins, the involved topics need to be copartitioned, meaning that they need to have the same partition count (and producers using the same partitioner, but that is hard to enforce). Even without joins, you don’t want messages with the same key to end up in different KTables.
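To see why this matters, recall how the default partitioner picks a partition for a keyed message: it hashes the key and takes the result modulo the partition count, so the key-to-partition mapping silently changes as soon as the partition count changes. A minimal sketch of that behaviour (the class name and key are invented for illustration; Utils comes from the kafka-clients library):

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class PartitionMappingDemo {

    // Mirrors the approach of Kafka's default partitioner for keyed messages:
    // murmur2-hash the key bytes and map the result onto the partition count.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        String key = "customer-42";
        // The same key can land on a different partition once the count changes.
        System.out.println("32 partitions -> " + partitionFor(key, 32));
        System.out.println("50 partitions -> " + partitionFor(key, 50));
    }
}
```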
Changing the replication factor is serious business. It is not a case of simply saying “please increase the replication factor to x” as it is with the partition count. You need to completely reassign partitions to brokers, specifying the preferred leader and n replicas for each partition. It is your task to distribute those well across your cluster. This is no fun for anyone involved. Practical experience with this has actually led to this blog post. The technical configuration has an impact as well. It could, for example, be quite essential that a topic uses compaction instead of deletion if an application depends on that. You also might find the retention time too small or too big.
The Evils of Automatic Topic Creation
In a recent project, a central team managed the Kafka cluster. This team kept a lot of default values in the broker configuration. This is mostly sensible as Kafka comes with pretty good defaults. However, one thing they kept was auto.create.topics.enable=true. This property means that whenever a client tries to write to or read from a non-existing topic, Kafka will automatically create it. Defaults for partition count and replication factor were kept at 1.
This led to the situation where the team forgot to set up a new topic manually before running producers and consumers. Kafka created that topic with default configuration. Once this was noticed, all applications were stopped and the topic deleted – only to be created again automatically seconds later, presumably because the team didn’t find all clients. “Ok”, they thought, “let’s fix it manually”. They increased the partition count to 32, only to realize that they had to provide the complete partition assignment map to fix the replication factor. Even with tool support from Kafka Manager, this didn’t give the team members a great feeling. Luckily, this was only a development cluster, so nothing really bad happened. But it was easy to conceive that this could also happen in production as there are no safeguards.
Another danger of automatic topic creation is the sensitivity to typos. Let’s face it – sometimes we all suffer from butterfingers. Even if you took all necessary care to correctly create a topic called “parameters”, you might end up with something like this:
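A hypothetical snippet follows; the misspelled topic “paramters”, the key, the value and the broker address are all invented for illustration:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "kafka:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

// "paramters" instead of "parameters": with auto creation enabled,
// Kafka silently creates the misspelled topic and the send succeeds.
try (Producer<String, String> producer = new KafkaProducer<>(props)) {
    producer.send(new ProducerRecord<>("paramters", "some-key", "some-value"));
}
```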
Automatic topic creation means that your producer thinks everything is fine, and you’ll scratch your head as to why your consumers don’t receive any data.
Another conceivable issue is that a developer who is perhaps not yet that familiar with the Producer API might confuse the String parameters of the send method.
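For instance (topic, key and payload below are made up for illustration):

```java
import java.util.UUID;
import org.apache.kafka.clients.producer.ProducerRecord;

// Intended: topic "parameters", a random message key, a fixed payload.
ProducerRecord<String, String> intended =
        new ProducerRecord<>("parameters", UUID.randomUUID().toString(), "payload");

// Accidental: with the two-argument constructor the random String is the
// *topic name*, so every send ends up on (and creates) a brand-new topic.
ProducerRecord<String, String> accidental =
        new ProducerRecord<>(UUID.randomUUID().toString(), "payload");
```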
So while our developer meant to assign a random value to the message key, he accidentally set a random topic name. Every time a message is produced, Kafka creates a new topic.
So why don’t we just switch automatic topic creation off? Well, if you can: do it. Do it now! Sadly, the team didn’t have that option. But an idea was born – what would be the easiest way to at least fail fast at application startup when something is different than expected?
How to automatically check your topic configuration
In older versions of Kafka, we basically used the code called by the kafka-topics.sh script to programmatically work with topics. To create a topic for example we looked at how to use kafka.admin.CreateTopicCommand. This was definitely better than writing straight to Zookeeper because there is no need to replicate the logic of “which ZNode goes where”, but it always felt like a hack. And of course we got a dependency on the Kafka broker in our code – definitely not great.
Kafka 0.11 implemented KIP-117, thus providing a new type of Kafka client – org.apache.kafka.clients.admin.AdminClient. This client enables users to programmatically execute admin tasks without relying on those old internal classes or even Zookeeper – all Zookeeper tasks are executed by brokers.
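Creating such a client only needs the broker addresses. A minimal sketch (the broker address is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

Properties adminProps = new Properties();
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
AdminClient adminClient = AdminClient.create(adminProps);
```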
With AdminClient, it’s fairly easy to query the cluster for the current configuration of a topic. For example, this is the code to find out if a topic exists and what its partition count and replication factor are:
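A sketch along those lines, reusing the adminClient created above (the topic name is a placeholder):

```java
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.DescribeTopicsResult;
import org.apache.kafka.clients.admin.TopicDescription;

DescribeTopicsResult result = adminClient.describeTopics(Collections.singleton("test_topic"));
try {
    TopicDescription description = result.values().get("test_topic").get();
    int partitionCount = description.partitions().size();
    int replicationFactor = description.partitions().get(0).replicas().size();
    System.out.println("partitions=" + partitionCount + ", replicationFactor=" + replicationFactor);
} catch (ExecutionException e) {
    // An UnknownTopicOrPartitionException as the cause means the topic does not exist.
    System.out.println("Topic lookup failed: " + e.getCause());
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
```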
The DescribeTopicsResult contains all the info required to find out if the topic exists and how partition count and replication factor are set. It’s asynchronous, so be prepared to work with Futures to get your info.
Getting configs like cleanup.policy works similarly, but uses a different method:
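Roughly like this, again reusing the adminClient from above (the topic name is a placeholder):

```java
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.DescribeConfigsResult;
import org.apache.kafka.common.config.ConfigResource;

ConfigResource topicResource = new ConfigResource(ConfigResource.Type.TOPIC, "test_topic");
DescribeConfigsResult configsResult = adminClient.describeConfigs(Collections.singleton(topicResource));
try {
    Config topicConfig = configsResult.values().get(topicResource).get();
    // Each entry (e.g. cleanup.policy) is available as a ConfigEntry with its current value.
    System.out.println("cleanup.policy=" + topicConfig.get("cleanup.policy").value());
} catch (ExecutionException | InterruptedException e) {
    System.out.println("Config lookup failed: " + e);
}
```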
Under the hood, there is the same Future-based mechanism.
A first implementation attempt
If you are in a situation where your application depends on a certain configuration for the Kafka topics you use, it might make sense to fail early when something is not right. You get instant feedback and have a chance to fix the problem. Or you might at least want to emit a warning in your log. In any case, as nice as the AdminClient is, this check is not something you should have to implement yourself in every project.
Thus, the idea for a small library was born. And since naming things is hard, it’s called “Club Topicana”.
With Club Topicana, you can check your topic configuration every time you create a Kafka Producer, Consumer or Streams client.
Expectations can be expressed programmatically or via configuration. Programmatically, it uses a builder:
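What follows is only a sketch of such a builder: the class and method names are illustrative assumptions, not necessarily Club Topicana’s actual API (the project’s GitHub page has the real one).

```java
// Illustrative builder sketch; names are assumptions, not the library's verified API.
ExpectedTopicConfiguration expected = new ExpectedTopicConfigurationBuilder("test_topic")
        .withPartitionCount(32)
        .withReplicationFactor(3)
        .with("cleanup.policy", "delete")
        .with("retention.ms", "30000")
        .build();
```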
This basically says “I expect the topic test_topic to exist. It should also have 32 partitions and a replication factor of 3. I also expect the cleanup policy to be delete. Kafka should retain messages for at least 30 seconds.”
Another option to specify an expected configuration is YAML (parser is included):
What do you do with those expectations? The library provides factories for all Kafka clients that mirror their public constructors and additionally expect a collection of expected topic configurations. For example, creating a producer can look like this:
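Sketched with assumed names (the factory class and method are illustrative; the expectations collection is the one built above):

```java
// Illustrative factory sketch; class and method names are assumptions.
Properties producerProps = new Properties();
producerProps.put("bootstrap.servers", "kafka:9092");
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer =
        KafkaProducerFactory.producer(producerProps, Collections.singleton(expected));
```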
The last line throws a MismatchedTopicConfigException if the actual configuration does not meet expectations. The message of that exception lists the differences. It also provides access to the computed result so users can react to it in any way they want.
The code for consumers and streams clients looks similar. Examples are available on GitHub. If all standard clients are created using Club Topicana, an exception will prevent creation of a client and thus auto creation of a topic. Even if auto creation is disabled, it might be valuable to ensure that topics have the correct configuration.
There is also a Spring client. The @EnableClubTopicana annotation triggers Club Topicana to read YAML configuration and execute the checks. You can configure if you want to just log any mismatches or if you want to let the creation of the application context fail.
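A minimal sketch of what that can look like: @EnableClubTopicana is the annotation named above, while the package, class name and any attributes are assumptions.

```java
import org.springframework.context.annotation.Configuration;

// Hypothetical Spring configuration class; the annotation's package/import is assumed.
@Configuration
@EnableClubTopicana
public class TopicExpectationsConfiguration {
    // Expectations themselves are read from the YAML file described above.
}
```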
This is all on GitHub and available on Maven Central.
Caveats
Club Topicana will not notice when someone changes the configuration of a topic after your application has successfully started. It also, of course, cannot guard against other clients doing whatever they like on Kafka.
Summary
The configuration of your Kafka topics is an essential part of running your Kafka applications. Wrong partition count? You might not get the parallelism you need or your streams application might not even start. Wrong replication factor? Data loss is a real possibility. Wrong cleanup policy? You might lose messages that you depend on later. Sometimes, your topics might be auto-generated and come with bad defaults that you have to fix manually. With the AdminClient introduced in Kafka 0.11, it’s simple to write a library that compares actual and desired topic configurations at application startup.
The post Validating Topic Configurations in Apache Kafka appeared first on codecentric AG Blog.