Why You Need to Look Beyond Kafka for Operational Use Cases, Part 3: Securing the Eventing Platform
Preface
Over the last few years Apache Kafka has taken the world of analytics by storm, because it’s very good at its intended purpose of aggregating massive amounts of log data and streaming it to analytics engines and big data repositories. This popularity has led many developers to also use Kafka for operational use cases which have very different characteristics, and for which other tools are better suited.
This is the third in a series of four blog posts in which I’ll explain the ways in which an advanced event broker, specifically PubSub+, is better for use cases that involve the systems that run your business. The first was about the need for filtration and in-order delivery, the second was about the importance of flexible filtering, and this one is focused on securing Apache Kafka. My goal is not to dissuade you from using Kafka for the analytics use cases it was designed for, but to help you understand the ways in which PubSub+ is a better fit for operational use cases that require the flexible, robust and secure distribution of events across cloud, on-premises and IoT environments.
Summary
Architecture
A Kafka-based eventing solution can consist of several components beyond the Kafka brokers: a metadata store like Zookeeper or MDS, the Connect framework, protocol proxies, and replication tools like MirrorMaker or Replicator. Each of these components has its own security model and capabilities, and some are far more mature and robust than others. This mix of components, each with its own security features and configuration, introduces the “weak link in the chain” problem and makes it harder than it should be to centrally manage the security of your system. Securing Apache Kafka installations is hard enough that it forms a big part of the business models of Kafka providers like Confluent, AWS, Red Hat, Azure, IBM and smaller players – each one with a unique approach and set of tools.
Solace handles everything in the broker, with a centralized management model that eliminates these problems and makes it easy to ensure the security of your system across public clouds, private clouds and on-premises installations – which is critical for the operational systems that run your business.
Performance
The centerpiece of the Kafka architecture is a simple publish-subscribe topic, and Kafka derives a lot of its performance and scalability from this simplistic view of data. That simplicity makes it challenging to implement the security features it takes to ensure that a user is authenticated and authorized to receive only the data they are entitled to. Kafka’s security features are coarse-grained, and they are complicated to implement because of the many components in a Kafka system, some of which are not themselves very secure.
Solace has a full set of embedded, well-integrated security features that impose minimal impact on performance and operational complexity.
Connection Points
Beyond the Apache Kafka security concerns mentioned above, supporting different types of users (native Kafka, REST, MQTT, JMS) makes the task of securing Apache Kafka-based systems immensely more complicated. Each type of user has a different connection point (proxy or broker) to the eventing system, every connection point has different security features, and it is up to the operator to coordinate authentication and authorization policies across these entry points where possible. Adding a new user type to the cluster is a complicated task that likely introduces new licensed products and greatly affects operational tasks.
Solace has a single entry point to a cohesive event mesh for all user types (across protocols and APIs) and applies consistent authentication and authorization policies to all users without changing product costs or greatly changing anything from an operational point of view.
Simple Example of Securing Apache Kafka
Adding Security to Eventing Systems
In the first article in this series, I discussed an event-based infrastructure used to process orders for an online store. In the scenario, you are asked to imagine that you’ve been tasked with splitting the infrastructure across your datacenter and a public cloud, and now must ensure that all interactions happen securely. This would involve:
Authenticating all users (administrative users and applications), likely integrated with existing enterprise authentication mechanisms.
Authorizing authenticated users to produce and consume specific data using access control lists (ACLs), again likely integrated into existing enterprise policy groups.
Securing all data movement, likely with TLS on all links.
Securing configurations and monitoring any changes to the system that might compromise security.
At a component level, it would look like this:
Ability to authenticate all connections, including internal system connections, back-end services and remote front-end clients. Have the event system validate authentication with a centralized enterprise service to reduce administrative complexity. Note that in this simple diagram the Realtime Event System could contain things like firewalls, API gateways or other components needed to securely terminate the front-end client connections.
Ability to enforce fine-grained authorization policies which are centrally managed.
Ability to distribute and revoke security certificates from a centralized service.
Ability to push configuration change notifications to a centralized audit service, looking for the introduction of vulnerabilities. Changes should be scoped by role to allow different levels of administration.
Securing Apache Kafka-based Event Systems
If you look at how to overlay security on a system built with Apache Kafka/Confluent, you can see there are several required components. Each component needs to be addressed when considering an enterprise security posture, because things like security principals and certificate validation are scattered across these components. In the diagram below I show the basic components that make up a Kafka eventing system and add lines for interactions within the system or with enterprise security services.
The green lines show components or interactions that are fairly easy to secure, and to manage in a secure way. The red lines show areas where security is either lacking in functionality, complicated, or simply left to administrators to come up with their own solution.
1. Enterprise Authentication: Supported in some places and not in others.
The Apache Kafka broker supports unmanaged (see #4 below) JAAS file-based authentication with SSL, SASL/PLAIN and SCRAM. It also supports more enterprise-oriented options, including Kerberos and OAuth2.
If you use SASL/PLAIN instead of an enterprise authentication solution, you need to restart the brokers every time you add or delete a user.
If you use SCRAM instead of an enterprise authentication solution, usernames and passwords will be stored in Zookeeper, which causes several security concerns (see #9 below).
The Confluent broker extends authentication with LDAP, but not all components support this; Zookeeper, for example, only supports Kerberos and MD5-Digest.
The Confluent REST Proxy does not integrate with standards-based OAuth2/OpenID Connect; instead it relies on client/server certificates with no certificate management (see #3 below).
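To make that configuration surface concrete, here is a rough sketch of a Java producer set up for SASL/SCRAM over TLS. The broker address, credentials and truststore path are placeholders, and the SCRAM user itself still has to be created out of band before this client can connect, which is exactly where the Zookeeper-based credential storage concern above comes in.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ScramProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address and credentials; substitute your own.
        props.put("bootstrap.servers", "broker.example.com:9093");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // SASL/SCRAM over TLS: the client authenticates with a username/password
        // whose credentials the broker keeps in Zookeeper (see #9 below).
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
            + "username=\"orders-service\" password=\"changeit\";");

        // Truststore holding the broker's CA certificate (placeholder path).
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-123", "{\"total\": 42}"));
        }
    }
}
```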
2. Enterprise Authorization: Supported in some releases and not in others.
Apache Kafka’s SimpleAclAuthorizer stores ACLs in Zookeeper, with no integration with enterprise policy services.
Confluent does sell an LDAP authorizer plugin for the Kafka broker that supports LDAP group-based mapping of policies.
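To show what topic-level granularity means in practice, here is a minimal sketch that uses the Kafka AdminClient to grant a hypothetical “User:analytics” principal read access to an “orders” topic; the names are illustrative and the broker must have an authorizer configured. The grant covers every record on the topic, since there is no way to express “only the orders belonging to this consumer”, and with SimpleAclAuthorizer the resulting binding is stored in Zookeeper.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class CreateTopicAclSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Allow the principal User:analytics to READ the whole "orders" topic.
            // The grant applies to every record on the topic; there is no
            // per-customer or per-attribute filtering at the ACL level.
            AclBinding binding = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "orders", PatternType.LITERAL),
                new AccessControlEntry("User:analytics", "*",
                    AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singletonList(binding)).all().get();
        }
    }
}
```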
3. Enterprise Certificate Management: Not supported.
Both the Apache Kafka and Confluent brokers use HTTPS/TLS authentication, but Kafka does not conveniently support blocking authentication for individual brokers or clients that were previously trusted, which means there is no way to manage the certificate lifecycle through revocation.
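For illustration, this is roughly what a mutual-TLS Kafka client looks like; paths, passwords and the topic are placeholders. Once the client certificate is signed by a CA the broker trusts, there is no built-in revocation step to withdraw trust from just this one client later, leaving you to rotate truststores or add DENY ACLs for its principal.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MutualTlsProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Mutual TLS: the client presents the certificate in its keystore and
        // trusts the broker via its truststore. The broker identifies the
        // client by the certificate's subject (e.g. CN=orders-service).
        props.put("security.protocol", "SSL");
        props.put("ssl.keystore.location", "/etc/kafka/orders-service.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");

        // Once this certificate is signed by a trusted CA there is no built-in
        // way to revoke just this client's access short of rotating truststores
        // or adding DENY ACLs for its principal.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-123", "{\"total\": 42}"));
        }
    }
}
```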
4. Enterprise Change Management: Not supported.
Configurations and JAAS files are spread throughout the solution on brokers, proxies and Zookeeper nodes, but there is no native Kafka solution for tracking changes to these files; it is up to the administration team to build a solution for this. This file-based configuration also means it is not possible to apply RBAC rules to control which parts of the configuration someone is allowed to administer. Confluent Control Center does help here in that it allows some level of configuration validation, but there is no ability to back up and restore configurations. It also does not provide a complete authenticated and authorized interface for controlling configuration changes, and instead points you to unauthenticated command-line tools to perform these tasks: "There are still a number of useful operations that are not automated and have to be triggered using one of the tools that ship with Kafka."
5. Services run as standalone apps or plugins within the Kafka Connect Framework.
Applications can be written to work securely as long as they do not take advantage of data-filtering patterns as described here. If they use this pattern, centralized authorization policies are defeated, and you will not be able to tell where the data has been disseminated.
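As a hedged illustration of the pattern being warned about, the sketch below shows a consumer that must be granted READ on an entire topic and then filters records in application code; the topic and key convention are made up. Once records have left the broker this way, the broker-side ACLs can no longer tell you where the data went.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ClientSideFilteringSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093"); // placeholder
        props.put("group.id", "emea-order-processor");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The ACL has to allow READ on the whole topic, even though this
            // service only cares about EMEA orders: every record is delivered,
            // and the "authorization" happens in application code.
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.key() != null && record.key().startsWith("EMEA-")) {
                        process(record.value());
                    } // everything else was still transmitted, just dropped here
                }
            }
        }
    }

    private static void process(String order) {
        System.out.println("processing " + order);
    }
}
```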
6. REST Clients: Can make requests, but receive no events or notifications
The Confluent REST proxy clients can create events such as a new purchase order, but cannot receive per-client notifications, even with polling. This is because filtering is not fine-grained enough to ensure that each consumer receives only its own data and not data meant for other consumers. So even if a back-end service receives an event from a REST request, there is no natural way to assign a reply-to topic address that only the correct client can consume. The application development team will have to build or buy another solution to allow REST consumers to interact with the online store securely, while ensuring interoperability with enterprise authentication, authorization, auditing and certificate management systems.
7. REST Proxy: Single Kafka user and loss of granularity in the request
By default, the Confluent REST proxy connects to the Kafka broker as a single user. This means that all requests into the REST proxy appear to the Kafka broker as though they came from a single user, and a single ACL policy is applied to all requests regardless of the actual originator. It is possible to purchase a security plugin that extracts the principal from each request and maps it to the principal of the message being sent to Kafka. This principal mapping allows per-user ACLs to be applied by the Kafka broker, but it means authenticating twice and authorizing once on every REST request. You will need to keep configurations in sync across proxies and brokers for this to work, and it adds extra overhead since each publish request requires a connection setup, TLS handshake and authentication to the Kafka broker.
Each REST request URL path is mapped to a Kafka topic based on a matching regex pattern held in a local file. This reduces functionality and strips away the ability to implement fine-grained controls.
For these reasons, most people build or buy a proper REST termination interface instead of using the REST proxy.
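For reference, producing through the REST proxy looks roughly like the sketch below, shown with Java 11’s built-in HTTP client; the URL and payload are illustrative. The thing to notice is that whatever credentials the caller presents here authenticate it to the proxy only; without the commercial security plugin, the broker sees every such request as the proxy’s own single principal.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestProxyProduceSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative proxy URL; the v2 REST Proxy expects this content type.
        String url = "https://rest-proxy.example.com:8082/topics/orders";
        String body = "{\"records\":[{\"key\":\"order-123\",\"value\":{\"total\":42}}]}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
            .header("Content-Type", "application/vnd.kafka.json.v2+json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        // Any TLS client certificate or basic-auth header attached here
        // authenticates the caller to the proxy only; the proxy then connects
        // to the broker as its own single principal, so broker-side ACLs
        // cannot distinguish this caller from any other REST client.
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```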
8. Kafka Connect: Connect Cluster REST interface exposes a new attack vector
The producer and consumer interfaces for the embedded plugins and the Kafka broker connections pose no additional security risks, but the Connect cluster’s REST interface exposes the inner workings of the Connect framework such that someone could glean sensitive information about other plugins and application credentials. This REST interface cannot be secured via HTTPS or Kerberos, and there are no RBAC-like role policies governing which endpoints can be accessed once a REST connection is made to the Connect cluster.
9. Zookeeper: Security issues
Zookeeper nodes are not secure on their own; it is suggested that system administrators secure them by placing them on a protected network. "The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in Zookeeper. This is suitable for production use in installations where Zookeeper is secure and on a private network."
This is because the version of Zookeeper bundled with Kafka does not support SSL, and some of the protocols used between the Kafka broker and the Zookeeper nodes need SSL to be protected from attack.
This does not look great, but there are several paths forward: Zookeeper is becoming more secure in version 3.5, Confluent is offering MDS as a Zookeeper replacement, and there is a KIP to move the metadata store onto the Kafka broker itself. It remains to be seen which of these paths is the best one going forward, and whether any of them really improves the security situation.
10. Kafka Broker: Performance degradation with TLS enabled
The key to Kafka’s performance is the broker’s ability to forward events on simple topics in kernel space. Enabling TLS disables this zero-copy optimization, so Kafka can’t deliver events directly from the kernel’s write cache, which means it spends far more CPU cycles processing every event.
Securing Apache Kafka entails using the Java Cryptography Architecture (JCA) for encryption and decryption, which means encryption behavior and performance will vary based on the underlying JDK. This makes it difficult to know exactly what to expect in terms of performance degradation when enabling encryption, but most tests show roughly a 50–60% increase in broker CPU utilization for the same message rates and sizes, and some users have seen degradation as high as 80%.
Securing an Event Mesh Built with PubSub+
When you compare securing Apache Kafka to overlaying security on an event mesh built with Solace PubSub+, you can see the beauty of an integrated solution that was designed from day one to be the backbone of an enterprise eventing system. The Solace broker supports clients using multiple open protocols like REST, MQTT and AMQP, as well as the Solace protocol and JMS. Each of these clients is authenticated and authorized the same way, with the same interactions with enterprise authentication and authorization tools. The configuration and policies are all stored in an internal database where changes can be controlled through RBAC policies and are traceable via the logging infrastructure.
1. Enterprise Authentication: Consistent integration with enterprise authentication services
All protocols, including those on the management interface, uniformly support mutual certificate authentication, Kerberos, RADIUS and LDAP. Integration with enterprise authentication services is easy and secure, the configuration options are comprehensive, and configuration is possible through our WebUI. MQTT, mobile and web clients also support OAuth2/OpenID Connect. From an authentication perspective it does not matter which protocol you use to connect to the Solace broker to produce and consume events: producers and consumers are authenticated in a consistent manner, which ensures security and is easy to administer.
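As a rough sketch of what this means for an application (the host, VPN name and credentials are placeholders, and the code follows the pattern of Solace’s published JCSMP samples): the client sets its authentication scheme on the session properties and connects, and the broker validates the credentials against whatever enterprise backend it has been configured to use. Switching to client-certificate or Kerberos authentication changes a few properties, not the component architecture.

```java
import com.solacesystems.jcsmp.JCSMPException;
import com.solacesystems.jcsmp.JCSMPFactory;
import com.solacesystems.jcsmp.JCSMPProperties;
import com.solacesystems.jcsmp.JCSMPSession;

public class SolaceAuthSketch {
    public static void main(String[] args) throws JCSMPException {
        JCSMPProperties props = new JCSMPProperties();
        // Placeholder host, message VPN (the per-tenant application domain) and credentials.
        props.setProperty(JCSMPProperties.HOST, "tcps://broker.example.com:55443");
        props.setProperty(JCSMPProperties.VPN_NAME, "orders-vpn");
        props.setProperty(JCSMPProperties.USERNAME, "orders-service");
        props.setProperty(JCSMPProperties.PASSWORD, "changeit");

        // Basic auth shown here; the same session properties accept
        // client-certificate or Kerberos schemes, so the application code
        // does not change when the enterprise authentication backend does.
        props.setProperty(JCSMPProperties.AUTHENTICATION_SCHEME,
            JCSMPProperties.AUTHENTICATION_SCHEME_BASIC);

        JCSMPSession session = JCSMPFactory.onlyInstance().createSession(props);
        // Authentication happens on connect, against whatever backend
        // (LDAP, RADIUS, Kerberos, ...) the broker is configured to use.
        session.connect();
        System.out.println("connected as orders-service");
        session.closeSession();
    }
}
```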
2. Enterprise Authorization: Consistent integration with enterprise authorization services
Administrative users are authorized against role-based (RBAC) policies, and these users can have their roles set via enterprise authorization groups. This highlights the biggest difference between what Kafka and Solace mean by multi-tenant. When the concepts for securing Apache Kafka were introduced in Kafka 0.9 they were described as multi-tenant, but the model is really multi-user. With Solace, an administrator can be given administrative permissions to manage a single-tenant application domain (called a VPN), and in that domain they can create named users, queues, profiles and ACLs without worrying about conflicts with other administrators in other domains. All objects within their application domain are namespace-separated, much like being in your own Docker container. With Kafka there is no application domain level to administer: all topics and users live in a single flat namespace.
On the producer and consumer side, authorization ACL policies can be managed via enterprise authorization groups, meaning individual producers and consumers can be added to or removed from policy groups, which affects which ACLs are applied. The ACL rules themselves, and the application of ACLs to an application, are uniform and independent of the protocol the application uses to connect to the broker.
3. Enterprise Certificate Management: Able to control lifecycle of certificates
Solace supports TLS with mutual authentication for all protocols, with strong binding of the certificate’s Common Name (CN) to the security principal used for authorization. Solace supports proper lifecycle management of certificates, including the ability to revoke certificates in place.
4. Enterprise Change Management: Able to push configuration change notifications as well as secure the configuration
Solace configuration data is stored in an internal database that can be manually or automatically backed up periodically, exported for safekeeping, and restored. Configuration change notifications can be pushed to a change management system and are locally journaled for verification. Change logs include who made the change, when they made it, and where they connected from.
5. Application connections: Securely connect and send/receive data
Solace integrates authentication, authorization, accounting, and certificate management so applications can securely connect and send and receive the data they are entitled to produce and consume. Fine-grained filtering and ACLs allow for strict governance of the flow of data.
6. REST/MQTT application connections: Securely connect and send/receive data
The description written for back-end applications (see 5 above) also applies here because Solace’s broker natively supports open protocols, so it doesn’t rely on an external proxy. Solace’s broker was designed to directly authenticate and authorize hundreds of thousands of connections, allowing a consistent application of these policies across all connection protocols. Solace is able to apply ACLs directly to MQTT topics and subscriptions, and also supports dynamic subscriptions that allow per-user topics so a single customer can only produce and consume data related to themselves. The brokers also include advanced ACL capabilities, called substitution variables, that prevent applications from generating events that impersonate other users by validating the application’s authentication name within the topics it produces or consumes. Similar capabilities exist on the consumer side.
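Here is an illustrative sketch using the Eclipse Paho Java client, with a made-up host, credentials and topic structure: the customer’s MQTT client connects over TLS with its own username, and a broker-side ACL written with a substitution variable ties the allowed subscriptions to that username, so the client can only receive its own order-status events.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;

public class PerUserMqttSketch {
    public static void main(String[] args) throws MqttException {
        // Placeholder broker URL and credentials; "alice" is the authenticated username.
        // In a real app, trust for the broker's TLS certificate would be configured
        // via the JVM truststore or a custom socket factory.
        MqttClient client = new MqttClient("ssl://broker.example.com:8883", "alice-mobile-app");

        MqttConnectOptions options = new MqttConnectOptions();
        options.setUserName("alice");
        options.setPassword("changeit".toCharArray());
        client.connect(options); // the broker authenticates "alice" against its configured backend

        // On the broker, an ACL substitution variable ties the allowed topics to
        // the authenticated client username, so alice can subscribe only to her
        // own branch of the topic tree; a subscription to another customer's
        // order-status topic would be rejected by the broker.
        client.subscribe("acme/store/order-status/alice/#",
            (topic, message) -> System.out.println(topic + ": " + new String(message.getPayload())));
    }
}
```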
7. Solace Broker: TLS has a smaller impact on performance
Enabling TLS will have some performance impact, but Solace has minimized this by using highly optimized natively compiled C libraries.
Security Documentation
I’ve talked here at a fairly high level for the sake of brevity. If you want more information about any of the capabilities I’ve touched on, I suggest you reference the actual vendor documentation:
Apache Kafka security feature overview
Confluent security features overview
Solace security feature overview
Conclusion
For operational use cases that run your business, it is essential that you can trust that the data has not been corrupted or tampered with, and that it has not been observed by unauthorized users.
Securing Apache Kafka is difficult because its lightweight broker architecture doesn’t provide everything it takes to implement a complete event system, so you must surround the broker with complementary components like Zookeeper, proxies, data-filtering applications, etc. If you are using Kafka for analytics-centric use cases, the security models around these components might be enough, but for operational use cases these components, with their varying ability to integrate with enterprise security services, will probably not meet your security requirements.
Solace event brokers were designed to be multi-user and have evolved to be truly multi-tenant and multi-protocol. This provides a platform for integrating enterprise security services in a consistent way that lets you more easily and completely secure your system and the information that flows through it.
Specifically:
Securing Apache Kafka is complex because of the multiple components needed to build out a complete solution. Solace makes securing an eventing solution easy across all channels (web, mobile and IoT clients as well as back-end services) and for all users – applications and administrators.
Kafka does not provide the level of fine-grained authorization needed in purchase-order processing and other operational systems, so another set of third-party tools needs to be added to make a complete solution. Solace’s fine-grained ACLs allow simplified per-client authentication and authorization.
Kafka is not optimized for encryption, and on some links encryption is not supported at all. Solace is optimized for end-to-end encrypted connections.
In the next post in this series, I’ll explain the importance of replication in operational use cases.
GDPR and Pianola
Most clubs in the EU will already have started planning for the forthcoming data protection legislation - GDPR - that comes into force on 25 May. This post explains how GDPR affects your use of Pianola (in short - it doesn’t!).
GDPR tightens up the existing regulations in relation to protection of private data and introduces new penalties for breaches and loss of data.
Pianola’s features have always been designed with respect for the rights of individuals to control their personal information. We have undertaken a major review of the system to see what changes we need to make in relation to GDPR. The good news is that there are very few changes required in Pianola to make it compliant.
As a club, you are (and always have been) the data controller. When you use a system like Pianola, we act as your data processor. Here’s how Pianola will help you to fulfil your obligations under GDPR.
Encryption
According to the EBU's excellent guidance:
"If you keep your records on a computer, they should be ... locked and/or encrypted."
Pianola is hosted on a secure website, which means that all data is encrypted as it’s transmitted across the internet. We have always encrypted users’ passwords in our database but for GDPR, we are extending our encryption to cover all elements of users’ personal data.
Read more about encryption on the Information Commissioner’s Office (ICO) website.
Access control
As per the EBU’s guidance:
“If you keep your records on a computer, they should only be accessible by appropriate people… Only committee members or club managers, if relevant, should have access to members’ records.”
Pianola allows you to give users different levels of access, according to their role within the club. Using Pianola’s roles and permissions means you can have fine-grained control over who can access your members’ personal data. We are tightening up these permissions to provide a stricter level of access: after 25 May, people with the “secretary” role will no longer have access to members’ personal data.
Read more about Pianola’s roles and permissions.
Members’ preferences
The EBU’s guidance states:
“Clubs should not issue lists of members' contact details (telephone number and email address) to all their members. Any such list that is made available should only contain the details of members who have specifically agreed to this. Any clubs that currently publish such a list should contact all members on it to ask whether they wish to remain on the list. They should be asked to “opt in” to this - it is not permissible for the default to be to include them unless they opt out.”
By default, members’ personal information is not visible to other members of the club. It requires a positive “opt-in” action on the part of each member to make their information visible to other members. This has always been the case, since we launched Pianola in 2011. Members can choose to share some, none or all of their contact details: phone number(s), email address, postal address.
User login details
Pianola requires every user to choose their own secret password. There is no need to share an admin password. This means that you do not have to worry about changing passwords whenever there’s a change in the committee. You simply have to update the roles and permissions of anyone who no longer needs access to the data. Likewise, when a player leaves the club you can remove their access to Pianola completely.
Data exports
Although it’s possible to download a copy of your player records from Pianola, we recommend you don’t do this as it means you have the headache of protecting that file on your ‘local’ computer.
However, one very useful feature of Pianola is the ability to export your player records to EBUScore / Scorebridge. Previously, we have included all of your members’ personal data in these export files (address, phone, email, etc). From May 25th, we will remove this data from the export files. They will only contain the bare minimum required for your scoring program to work: name, national bridge organisation number, club number, membership status, EBU rank and NGS grade (where available).
Emails
As per the EBU’s guidance:
“Emails should not be sent to groups of people in a way that makes their email addresses visible. To avoid this, either use a mailshot program or blind copy (bcc) all the recipients.”
Pianola has always operated in this way. When you send emails via Pianola, each recipient receives an individual, personalised message. The email addresses of other recipients are not visible - and never have been.
Backups
The EBU recommends:
“Do not keep data in more places than necessary – not only does this weaken your security, it also increases the possibility that the data will get out of sync and will not be consistent in different places. It is however sensible to have a backup of your data providing that you have a system to ensure it is backed up regularly and kept in a secure place.”
Pianola means you do not need to keep extra copies of your members’ data. We backup your data every night, so there is no need to keep a local backup yourself (although you can download a copy of your database if you wish).
Privacy notice
The EBU recommends you publish a privacy policy and make it easily available to members. Pianola will allow you to insert a link to this policy into the footer of every email you send.
Anonymous EBU members
We are adding an option to mark a member of your club as anonymous on the EBU. This will replace their real name with a pseudonym. (Anyone who chooses to be anonymous will not be able to access any of the membership benefits of the EBU, of course.)
Players can also choose to make their NGS grade and / or EBU rank private, from within the members area of ebu.co.uk. When they do this, Pianola will automatically remove that data from their Pianola record within 24 hours.
Right to be forgotten
One of the provisions of GDPR is the right to be forgotten, which applies when the club no longer has a legal purpose for holding an individual’s data (e.g. when they have left the club). We are adding a button to wipe all trace of personal data from an individual’s record - but use this with caution, as it will be irreversible!
Updating our contract with you
GDPR requires data controllers to have a written contract with any data processor they use. We will be issuing this paperwork in due course, ahead of 25 May.
Questions
Please contact us if you have any questions about GDPR and how it will be supported by Pianola. You can reach me by email ([email protected]) or phone (0113 320 1352).