#data compliance
Scaling Your Data Mesh Architecture for Maximum Efficiency and Interoperability
A Full Guide to Fine-Tuning Large Language Models
Large language models (LLMs) like GPT-4, LaMDA, PaLM, and others have taken the world by storm with their remarkable ability to understand and generate human-like text on a vast range of topics. These models are pre-trained on massive datasets comprising billions of words from the internet, books, and other sources.
This pre-training phase imbues the models with extensive general knowledge about language, topics, reasoning abilities, and even certain biases present in the training data. However, despite their incredible breadth, these pre-trained LLMs lack specialized expertise for specific domains or tasks.
This is where fine-tuning comes in – the process of adapting a pre-trained LLM to excel at a particular application or use-case. By further training the model on a smaller, task-specific dataset, we can tune its capabilities to align with the nuances and requirements of that domain.
Fine-tuning is analogous to transferring the wide-ranging knowledge of a highly educated generalist to craft a subject matter expert specialized in a certain field. In this guide, we’ll explore the whats, whys, and hows of fine-tuning LLMs.
What is Fine-Tuning?
At its core, fine-tuning involves taking a large pre-trained model and updating its parameters using a second training phase on a dataset tailored to your target task or domain. This allows the model to learn and internalize the nuances, patterns, and objectives specific to that narrower area.
While pre-training captures broad language understanding from a huge and diverse text corpus, fine-tuning specializes that general competency. It’s akin to taking a Renaissance man and molding them into an industry expert.
The pre-trained model’s weights, which encode its general knowledge, are used as the starting point or initialization for the fine-tuning process. The model is then trained further, but this time on examples directly relevant to the end application.
By exposing the model to this specialized data distribution and tuning the model parameters accordingly, we make the LLM more accurate and effective for the target use case, while still benefiting from the broad pre-trained capabilities as a foundation.
Why Fine-Tune LLMs?
There are several key reasons why you may want to fine-tune a large language model:
Domain Customization: Every field, from law to medicine to software engineering, has its own nuanced language conventions, jargon, and contexts. Fine-tuning allows you to customize a general model to understand and produce text tailored to the specific domain.
Task Specialization: LLMs can be fine-tuned for various natural language processing tasks like text summarization, machine translation, question answering, and so on. This specialization boosts performance on the target task.
Data Compliance: Highly regulated industries like healthcare and finance have strict data privacy requirements. Fine-tuning allows training LLMs on proprietary organizational data while protecting sensitive information.
Limited Labeled Data: Obtaining large labeled datasets for training models from scratch can be challenging. Fine-tuning allows achieving strong task performance from limited supervised examples by leveraging the pre-trained model’s capabilities.
Model Updating: As new data becomes available over time in a domain, you can fine-tune models further to incorporate the latest knowledge and capabilities.
Mitigating Biases: LLMs can pick up societal biases from broad pre-training data. Fine-tuning on curated datasets can help reduce and correct these undesirable biases.
In essence, fine-tuning bridges the gap between a general, broad model and the focused requirements of a specialized application. It enhances the accuracy, safety, and relevance of model outputs for targeted use cases.
Fine-Tuning Approaches
There are two primary strategies when it comes to fine-tuning large language models:
1) Full Model Fine-tuning
In the full fine-tuning approach, all the parameters (weights and biases) of the pre-trained model are updated during the second training phase. The model is exposed to the task-specific labeled dataset, and the standard training process optimizes the entire model for that data distribution.
This allows the model to make more comprehensive adjustments and adapt holistically to the target task or domain. However, full fine-tuning has some downsides:
It requires significant computational resources and time to train, similar to the pre-training phase.
The storage requirements are high, as you need to maintain a separate fine-tuned copy of the model for each task.
There is a risk of “catastrophic forgetting”, where fine-tuning causes the model to lose some general capabilities learned during pre-training.
Despite these limitations, full fine-tuning remains a powerful and widely used technique when resources permit and the target task diverges significantly from general language.
2) Efficient Fine-Tuning Methods
To overcome the computational challenges of full fine-tuning, researchers have developed efficient strategies that update only a small subset of the model’s parameters during fine-tuning. These parameter-efficient techniques strike a balance between specialization and reduced resource requirements.
Some popular efficient fine-tuning methods include:
Prefix-Tuning: Here, a small number of task-specific vectors or “prefixes” are introduced and trained to condition the pre-trained model’s attention for the target task. Only these prefixes are updated during fine-tuning.
LoRA (Low-Rank Adaptation): LoRA injects trainable low-rank matrices into each layer of the pre-trained model during fine-tuning. These small rank adjustments help specialize the model with far fewer trainable parameters than full fine-tuning.
LoRA merits a closer look, including its mathematical formulation and a code example. It is a popular parameter-efficient fine-tuning (PEFT) technique that has gained significant traction in the field of large language model (LLM) adaptation.
What is LoRA?
LoRA is a fine-tuning method that introduces a small number of trainable parameters to the pre-trained LLM, allowing for efficient adaptation to downstream tasks while preserving the majority of the original model’s knowledge. Instead of fine-tuning all the parameters of the LLM, LoRA injects task-specific low-rank matrices into the model’s layers, enabling significant computational and memory savings during the fine-tuning process.
Mathematical Formulation
LoRA (Low-Rank Adaptation) is a fine-tuning method for large language models (LLMs) that introduces a low-rank update to the weight matrices. For a pre-trained weight matrix $W_0 \in \mathbb{R}^{d \times k}$, LoRA adds a low-rank product $BA$, with $B \in \mathbb{R}^{d \times r}$ and $A \in \mathbb{R}^{r \times k}$, where the rank $r \ll \min(d, k)$. This approach significantly reduces the number of trainable parameters, enabling efficient adaptation to downstream tasks with minimal computational resources. The updated weight matrix is given by $W = W_0 + BA$.
This low-rank update can be interpreted as modifying the original weight matrix $W_0$ by adding a low-rank matrix $BA$. The key advantage of this formulation is that instead of updating all $d \times k$ parameters in $W_0$, LoRA only needs to optimize the $r \times (d + k)$ parameters in $A$ and $B$, significantly reducing the number of trainable parameters.
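To make the savings concrete (an illustrative calculation with typical values, not figures from the original article): for a square projection with $d = k = 768$, as in a BERT-base attention layer, a full update trains $768 \times 768 = 589{,}824$ parameters, while LoRA with rank $r = 8$ trains only $8 \times (768 + 768) = 12{,}288$, roughly a 48x reduction for that one matrix.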
Here’s a minimal sketch in Python using Hugging Face’s peft library to apply LoRA to a pre-trained LLM for text classification (the model name and hyperparameter values are illustrative placeholders):
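```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Load a pre-trained BERT model with a sequence-classification head
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Configure LoRA: r sets the rank of the low-rank update, lora_alpha
# scales it, and target_modules picks which layers receive the update
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # BERT attention projections
)

# Wrap the base model; only the injected low-rank matrices are trainable
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```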
In this example, we load a pre-trained BERT model for sequence classification and define a LoRA configuration. The r parameter specifies the rank of the low-rank update, and lora_alpha is a scaling factor for the update. The target_modules parameter indicates which layers of the model should receive the low-rank updates. After creating the LoRA-enabled model, we can proceed with the fine-tuning process using the standard training procedure.
Adapter Layers: Similar to LoRA, but instead of low-rank updates, thin “adapter” layers are inserted within each transformer block of the pre-trained model. Only the parameters of these few new compact layers are trained.
Prompt Tuning: This approach keeps the pre-trained model frozen completely. Instead, trainable “prompt” embeddings are introduced as input to activate the model’s pre-trained knowledge for the target task.
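As a rough sketch of how prompt tuning looks in practice (again using the peft library; the base model, prompt text, and virtual-token count below are illustrative placeholders, not prescriptions from the original article):

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

# The base model stays completely frozen; only the virtual "prompt"
# token embeddings configured below are trainable.
model = AutoModelForCausalLM.from_pretrained("gpt2")

config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=8,                      # length of the soft prompt
    prompt_tuning_init=PromptTuningInit.TEXT,  # initialize from real text
    prompt_tuning_init_text="Classify the sentiment of this review:",
    tokenizer_name_or_path="gpt2",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # a few thousand params out of ~124M
```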
These efficient methods can provide up to 100x compute reductions compared to full fine-tuning, while still achieving competitive performance on many tasks. They also reduce storage needs by avoiding full model duplication.
However, their performance may lag behind full fine-tuning for tasks that are vastly different from general language or require more holistic specialization.
The Fine-Tuning Process
Regardless of the fine-tuning strategy, the overall process for specializing an LLM follows a general framework (a minimal end-to-end code sketch follows the steps below):
Dataset Preparation: You’ll need to obtain or create a labeled dataset that maps inputs (prompts) to desired outputs for your target task. For a text generation task like summarization, this would be pairs of input text and summarized output.
Dataset Splitting: Following best practices, split your labeled dataset into train, validation, and test sets. This separates data for model training, hyperparameter tuning, and final evaluation.
Hyperparameter Tuning: Parameters like learning rate, batch size, and training schedule need to be tuned for the most effective fine-tuning on your data. This usually involves a small validation set.
Model Training: Using the tuned hyperparameters, run the fine-tuning optimization process on the full training set until the model’s performance on the validation set stops improving (early stopping).
Evaluation: Assess the fine-tuned model’s performance on the held-out test set, ideally comprising real-world examples for the target use case, to estimate real-world efficacy.
Deployment and Monitoring: Once satisfactory, the fine-tuned model can be deployed for inference on new inputs. It’s crucial to monitor its performance and accuracy over time for concept drift.
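To tie these steps together, here is a minimal end-to-end sketch using Hugging Face’s transformers and datasets libraries; the dataset, model, and hyperparameter values are placeholders chosen for illustration:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          EarlyStoppingCallback, Trainer, TrainingArguments)

# 1) Dataset preparation: a labeled dataset mapping inputs to outputs
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

# 2) Dataset splitting: carve a validation set out of the training data;
# the dataset's own test split stays held out for final evaluation
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# 3) Hyperparameter tuning: values like these would be chosen by
# searching over the validation set
args = TrainingArguments(
    output_dir="finetuned-model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=5,
    evaluation_strategy="epoch",   # "eval_strategy" in newer versions
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

# 4) Model training with early stopping on the validation set
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()

# 5) Evaluation on the held-out test set
print(trainer.evaluate(dataset["test"]))
```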
While this outlines the overall process, many nuances can impact fine-tuning success for a particular LLM or task. Strategies like curriculum learning, multi-task fine-tuning, and few-shot prompting can further boost performance.
Additionally, efficient fine-tuning methods involve extra considerations. For example, LoRA’s low-rank updates must be combined with the pre-trained weights (merged into them, or applied alongside them) at inference time. Prompt tuning needs carefully designed prompts to activate the right behaviors.
Advanced Fine-Tuning: Incorporating Human Feedback
While standard supervised fine-tuning using labeled datasets is effective, an exciting frontier is training LLMs directly using human preferences and feedback. This human-in-the-loop approach leverages techniques from reinforcement learning:
PPO (Proximal Policy Optimization): Here, the LLM is treated as a reinforcement learning agent, with its outputs being “actions”. A reward model is trained to predict human ratings or quality scores for these outputs. PPO then optimizes the LLM to generate outputs maximizing the reward model’s scores.
RLHF (Reinforcement Learning from Human Feedback): This extends PPO by directly incorporating human feedback into the learning process. Instead of a fixed reward model, the rewards come from iterative human evaluations on the LLM’s outputs during fine-tuning.
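To sketch the mechanics, here is a deliberately simplified policy-gradient step in PyTorch. It is not a full PPO implementation: `reward_model` is a hypothetical stand-in for a learned scorer, prompt and padding tokens are not masked, and real PPO adds a clipped probability-ratio objective plus a KL penalty against the pre-trained model:

```python
import torch

def policy_gradient_step(policy, reward_model, tokenizer, prompts, optimizer):
    # Encode prompts and sample continuations from the current policy;
    # the sampled tokens are the "actions" in RL terms
    enc = tokenizer(prompts, return_tensors="pt", padding=True)
    sequences = policy.generate(
        **enc, max_new_tokens=32, do_sample=True,
        pad_token_id=tokenizer.pad_token_id,
    )

    # Re-run the policy to recover log-probabilities of the sampled tokens
    logits = policy(sequences).logits[:, :-1, :]
    logprobs = torch.log_softmax(logits, dim=-1)
    token_logprobs = logprobs.gather(
        -1, sequences[:, 1:].unsqueeze(-1)
    ).squeeze(-1)

    # Score complete sequences with the learned reward model
    # (assumed here to return one scalar reward per sequence)
    rewards = reward_model(sequences).detach()  # shape: (batch,)

    # REINFORCE objective: maximize reward-weighted log-likelihood;
    # PPO would replace this with a clipped ratio objective + KL penalty
    loss = -(rewards.unsqueeze(1) * token_logprobs).sum(dim=1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```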
While computationally intensive, these methods allow molding LLM behavior more precisely based on desired characteristics evaluated by humans, beyond what can be captured in a static dataset.
Companies like Anthropic used RLHF to imbue their language models like Claude with improved truthfulness, ethics, and safety awareness beyond just task competence.
Potential Risks and Limitations
While immensely powerful, fine-tuning LLMs is not without risks that must be carefully managed:
Bias Amplification: If the fine-tuning data contains societal biases around gender, race, age, or other attributes, the model can amplify these undesirable biases. Curating representative and de-biased datasets is crucial.
Factual Drift: Even after fine-tuning on high-quality data, language models can “hallucinate” incorrect facts or outputs inconsistent with the training examples over longer conversations or prompts. Fact retrieval methods may be needed.
Scalability Challenges: Full fine-tuning of huge models like GPT-3 requires immense compute resources that may be infeasible for many organizations. Efficient fine-tuning partially mitigates this but has trade-offs.
Catastrophic Forgetting: During full fine-tuning, models can experience catastrophic forgetting, where they lose some general capabilities learned during pre-training. Multi-task learning may be needed.
IP and Privacy Risks: Proprietary data used for fine-tuning can leak into publicly released language model outputs, posing risks. Differential privacy and information hazard mitigation techniques are active areas of research.
Overall, while exceptionally useful, fine-tuning is a nuanced process requiring care around data quality, bias and privacy considerations, risk mitigation, and balancing performance-efficiency trade-offs based on use case requirements.
The Future: Language Model Customization At Scale
Looking ahead, advancements in fine-tuning and model adaptation techniques will be crucial for unlocking the full potential of large language models across diverse applications and domains.
More efficient methods enabling fine-tuning even larger models like PaLM with constrained resources could democratize access. Automating dataset creation pipelines and prompt engineering could streamline specialization.
Self-supervised techniques to fine-tune from raw data without labels may open up new frontiers. And compositional approaches to combine fine-tuned sub-models trained on different tasks or data could allow constructing highly tailored models on-demand.
Ultimately, as LLMs become more ubiquitous, the ability to customize and specialize them seamlessly for every conceivable use case will be critical. Fine-tuning and related model adaptation strategies are pivotal steps in realizing the vision of large language models as flexible, safe, and powerful AI assistants augmenting human capabilities across every domain and endeavor.
4 Biggest Risks of Non-Compliance With Data Privacy Regulations
Not complying with data privacy laws can jeopardize an organization’s cybersecurity, finances, reputation, and more.
Data Compliance is non-negotiable. Here are the risks of non-compliance
On Dec. 9, 2022, an important data privacy compliance deadline will pass for organizations that process U.S. consumers’ financial data. Under the Gramm-Leach-Bliley Act (GLBA)’s updated Safeguards Rule, organizations have until that day to comply with a series of data security practices that the landmark U.S. financial data privacy law has mandated, which include:
Releasing periodic reports to boards of directors and governing bodies
Instituting secure software development practices
Identifying and managing data based on risk
Implementing and reviewing data access controls
Encrypting data both in transit and at rest
Establishing secure procedures for disposing of data
The GLBA is just one of many regulations across the globe with substantial privacy protections — a group that includes the European Union’s General Data Protection Regulation (GDPR), Brazil’s General Data Protection Law (LGPD), and the United States’ Health Insurance Portability and Accountability Act (HIPAA). These laws mandate that organizations within their jurisdiction employ effective data privacy practices to protect personally identifiable information (PII), or what the Safeguards Rule calls non-public personal information (NPI).
The stakes for data privacy compliance are high, as non-compliance can result in devastating consequences. Organizations can expect to face four major risks for non-compliance with data privacy laws: inadequate cybersecurity, expensive fines, high individual penalties, and reputational damage.
The Compliance (Cyber)Security Blanket
Compliance and data security go hand in hand — if an organization’s systems don’t comply with data privacy standards, there’s a good chance that its data security is lacking. After all, a major component of data privacy compliance is ensuring that consumers’ data is kept out of the hands of bad actors who could use it nefariously.
For instance, the GLBA requires financial institutions to “protect against any reasonably anticipated threats or hazards” as well as “unauthorized access to, or use of,” customers’ data. The Federal Financial Institutions Examination Council, which audits financial institutions, dictates that these institutions should use strong encryption and key management practices.
These practices, of course, improve compliance and security alike. So, even if your organization doesn’t get slapped with a data privacy lawsuit, non-compliance with data privacy regulations reflects poor data controls, a significant liability for your organization.
Non-Compliance Fines Aren’t Fine
If your organization violates a data privacy law, a punitive fine is almost guaranteed. These fines cover a wide spectrum of severity. For instance, every GLBA violation carries a fine of up to $100,000, while LGPD infractions carry a financial penalty of up to 2 percent of the sanctioned organization’s gross revenue, with a maximum fine of 50 million Brazilian Reals (about $9.7 million).
GDPR fines are even more grave — every GDPR violation can cost up to 4 percent of a company’s annual global revenues or €20 million (about $22 million), whichever is higher. To date, Amazon Europe Core S.a.r.l. has incurred the largest-ever GDPR fine: the Luxembourg National Commission for Data Protection levied a whopping $746 million fine on the technology giant for infringements related to Amazon’s advertising targeting system. Even moderate GDPR fines can exceed $10 million.
The Big House Awaits
When a data privacy law is broken, the organization may not be the only party held liable. While it varies case by case, employees within an infringing organization are sometimes penalized as well.
These penalties can include individual fines and jail time — and like organizational fines, they span a wide spectrum of severity. For instance, one individual who violated the U.K.’s Data Protection Act (DPA) in 2018 by stealing and selling customer records to rogue organizations received a six-month prison sentence. Individual penalties under the GLBA, meanwhile, are much steeper — each violation of the Act can result in fines of up to $10,000 for directors and officers, license revocations, and up to five years of imprisonment.
The new Safeguards Rule requires covered entities to report annually to their boards of directors, effectively putting the protection of PII/NPI directly onto board agendas. So, while prison sentences for GLBA non-compliance are rare, accountable organizations’ board members in particular should be concerned with upholding the interests of their stakeholders via compliance.
Your Reputation Precedes You
The penalties and the gravity of a cybersecurity breach that a noncompliant organization experiences can, of course, be measured. Less quantifiable, though, is the reputational harm that a non-compliance lawsuit can inflict on organizations. Today’s omnipresent digital media ensures that word travels far and fast when organizations break the law — and as Warren Buffett famously said, “it takes 20 years to build a reputation and five minutes to ruin it.”
Reputational damage that an organization suffers as a result of any kind of lawsuit can manifest in two ways, as one business and commercial law firm notes. On the one hand, a lawsuit can hurt the organization’s reputation with the public — yet on the other hand, it can also dissuade companies from doing business with the defendant organization. A data privacy lawsuit naturally implies that an organization is either inept or apathetic in handling consumers’ data, so it could easily inflict both kinds of reputational damage.
Alliance With Compliance
The updated GLBA Safeguards Rule confirms two truths for accountable organizations. First, data privacy is a constantly evolving practice. And second, because these laws evolve, organizations cannot rest on their laurels once they have brought their practices into compliance. Carrying out the due diligence necessary to comply with updated regulations is far less costly than risking the penalties of noncompliance. The December 9 deadline is fast approaching. Delphix helps many banks and other covered organizations ensure compliance with a variety of data privacy-related regulations, including the GLBA Safeguards Rule, while also bolstering data security.
This article was originally published on Preludesys.
80% of data is unstructured. Manage it like a pro!
Unstructured data is a diverse array of information that is not stored in a traditional database. This includes emails, images, videos, and more. While unstructured data can be a valuable asset for businesses, it can also pose a significant challenge if it is not managed effectively.
In this webinar, industry experts will share their insights on how to manage unstructured data like a pro. They will discuss:
The different types of unstructured data and the associated risks
Strategies for identifying, organizing, and extracting value from unstructured data
Best practices for data security and compliance
This webinar is for business leaders, data professionals, and anyone who wants to learn more about how to manage unstructured data like a pro.
Register for the webinar today!
What is the difference between Data Scientists and Data Engineers?
In today’s data-driven world, organizations harness the power of data to gain valuable insights, make informed decisions, and drive innovation. Two key players in this data-centric landscape are data scientists and data engineers. Although their roles are closely related, each possesses unique skills and responsibilities that contribute to the successful extraction and utilization of data. In…
Top 10 Indian Law Firms for Data Privacy and Compliance Services
In today’s digital age, data privacy and compliance have become paramount for businesses. Many companies, especially those dealing with personal data, require expert guidance on the legal and regulatory frameworks governing data protection. This is where Indian law firms come in. With their extensive knowledge of Indian laws and regulations, they can help businesses navigate the complex landscape of data privacy and compliance. In this blog post, we’ve rounded up the top 10 Indian law firms specializing in data privacy and compliance services.
10 Law Firms in India for Data Compliance
Trilegal
Trilegal is one of the leading Indian law firms providing comprehensive legal services for data privacy and compliance. Their team of experienced lawyers offers solutions tailored to meet their clients’ specific needs. They have worked with high-profile clients across various industries, including telecommunications, information technology, and e-commerce.
Khaitan & Co.
Khaitan & Co. is a full-service law firm focusing on data privacy and compliance. They assist clients with regulatory compliance, data localization requirements, and data breach prevention. The firm has a team of skilled lawyers who stay up-to-date with the latest developments in data privacy laws to provide the best possible legal solutions for their clients.
Shardul Amarchand Mangaldas & Co.
Shardul Amarchand Mangaldas & Co. is one of the largest law firms in India, with an extensive team of lawyers specializing in data privacy and compliance. They provide end-to-end solutions for data protection, including privacy impact assessments, legal compliance, and data breach management. With their in-depth knowledge of the legal and regulatory landscape, the firm offers practical and effective advice to their clients.
S&R Associates
S&R Associates is a full-service law firm that provides specialized legal services for data privacy and compliance. Their team of lawyers has extensive experience advising clients on data protection regulations, compliance requirements, and data breach prevention. They work closely with their clients to provide customized solutions that meet their unique business needs.
Cyril Amarchand Mangaldas
Cyril Amarchand Mangaldas is another leading Indian law firm with a strong focus on data privacy and compliance. They provide a wide range of services, including data privacy audits, regulatory compliance, and risk management. With their deep understanding of the legal framework governing data protection, the firm can help clients develop practical solutions to meet their unique needs.
Singhania & Partners
Singhania & Partners is a full-service law firm that offers expert advice and assistance for data privacy and compliance. They work closely with their clients to understand their business needs and provide customized solutions that meet their unique requirements. The firm’s lawyers are well-versed in the legal and regulatory framework governing data protection and can provide effective solutions to meet their clients’ needs.
Nishith Desai Associates
Nishith Desai Associates is a leading law firm that provides comprehensive legal services for data privacy and compliance. With a team of experienced lawyers, they offer solutions that are customized to meet the unique needs of their clients. The firm has a deep understanding of the legal and regulatory framework governing data protection and can provide practical and effective advice to businesses operating in India.
J. Sagar Associates
J. Sagar Associates is a full-service law firm that provides specialized legal services for data privacy and compliance. Their team of lawyers has extensive experience advising clients on data protection regulations, compliance requirements, and data breach prevention. They work closely with their clients to provide customized solutions that meet their unique business needs.
Kochhar & Co.
Kochhar & Co. is a leading Indian law firm that offers a wide range of legal services, including data privacy and compliance. Their team of lawyers has in-depth knowledge of the legal and regulatory landscape governing data protection and can provide practical solutions to help clients navigate the complex regulatory environment. They work closely with their clients to understand their business needs and provide customized solutions to meet their specific requirements.
Rajah & Tann
Rajah & Tann is a leading law firm that provides comprehensive legal services for data privacy and compliance. With a team of experienced lawyers, they offer customized solutions that meet the unique needs of their clients. The firm has a deep understanding of the legal and regulatory landscape governing data protection and can provide practical and effective advice to businesses operating in India.
Conclusion:
Data privacy and compliance have become critical requirements for businesses in today’s digital age. With the increasing focus on data protection regulations, it has become essential for businesses to work with experienced legal professionals who can help them navigate the complex regulatory landscape.
The top 10 Indian law firms for data privacy and compliance services listed above have a deep understanding of the legal and regulatory framework governing data protection and can provide effective solutions to meet the unique needs of businesses operating in India.
Businesses looking for legal advice and assistance on data privacy and compliance should consider working with these experienced and reliable law firms to mitigate their risks and ensure compliance with all relevant regulations.
It seems that EA is now accused of a major privacy violation, having allegedly used tracking tools in The Sims FreePlay app to secretly gather and transmit players’ personal information to Facebook for advertising purposes. This data potentially includes unique Facebook IDs, which can be used to match players’ in-game activities to their individual Facebook profiles. Attorneys suspect that these potential data-sharing practices may violate a federal privacy law and are now gathering players to take action.
So far there are at least two class actions against EA, because it allegedly used the Meta Pixel software to harvest data from players and sell it to Meta, the company that owns Instagram, Facebook, and other social networks.
It would be interesting to learn whether these allegations are true and how this would be seen in the eyes of the GDPR, European Regulation 679/2016, which allows the processing of personal data, including in the context of (online) games, only on a valid legal basis such as the consent of the data subjects.
Consent in the context of the GDPR must be understood as an unambiguous indication of an informed and freely given choice by the data subject, relating to specific processing activities. The burden of proof that these criteria are fulfilled falls upon the controller (i.e., the game developer).
Google Play lists EA’s privacy conditions for its games, including The Sims FreePlay. Basically, EA claims to use players’ data only to give them a “better game experience,” which seems vague but not necessarily illegitimate. The one less transparent thing I noticed is that the instructions for opting out of targeted marketing of in-game ads are in English rather than Italian: by downloading the game, players allow EA to share their account information with third-party partners to customize the advertising experience, which is basically what all app developers do, but it’s odd that the opt-out instructions haven’t been translated at all!
This is not the first time EA has been accused of, well, unethical commercial practices: EA has been ordered to pay fines by Austrian (2023) and Belgian (2018) civil courts because its FIFA loot boxes violated local gambling laws.
Moreover, it's important to notice that in January 2023, the European Parliament adopted a report calling for harmonized EU rules to achieve better player protection in the online video game sector.
The Parliament called for greater transparency from developers about in-game purchases: player should be aware of the type of content before starting to play and during the game. Also, players should be informed on the probabilities in loot box mechanisms, including information in plain language about what algorithms are devised to achieve.
The Parliament further stressed that the proposed legislation should assess whether an obligation to disable in-game payments and loot box mechanisms by default, or a ban on paid loot boxes, should be proposed to protect minors, avoid the fragmentation of the single market, and ensure that consumers benefit from the same level of protection regardless of their place of residence.
The Parliament highlighted problematic practices, including exploiting cognitive biases and vulnerabilities of consumers through deceptive design and marketing, using layers of virtual currencies to mask/distort real-world monetary costs, and targeting loot boxes and manipulative practices towards minors.
I want everyone - every single one of you - to reach out to your healthcare providers and ask them what third-party tracking they use on any online portals you use to access your medical charts. I received a letter today from my local healthcare network ("local" in that they serve the entirety of the 300sq miles of Indianapolis and many of the surrounding cities) letting me know there had been a breach of my protected medical information. Everyone's, in fact. See, apparently they use both Facebook and Google tracking in their internal systems. This was never disclosed to me in the years I have been a patient there. I did not know that these megacorporations that are completely unrelated to healthcare could access my information, nor that every bit of communication I have had with my doctors over the online MyChart portal is run through their third-party tracking without my consent.
This information should be disclosed to the public long before you sign off to be a patient somewhere, but apparently it is not. So please ask your healthcare companies what tracking of your information there is in their systems, what it is for, and why it cannot be done without third parties like Facebook knowing about your reproductive healthcare. Or transition/gender healthcare. Or your HIV treatment. Demand these answers and do not take "because it's easy" as an answer. I am hoping that this will cause a class-action lawsuit, and if it does not, I will be taking legal action myself.
There is no reason third-party tracking needs to be used internally with sensitive documents. I urge you to find out what your healthcare providers are doing with your information and who else can access it. I will never be taking for granted that my medical documents will be protected, as clearly HIPAA is not enough to make these companies guarantee my privacy.
Demonstrating Initiative: Offering Solutions to Showcase Your Industry Expertise
One of the most compelling ways to demonstrate your industry knowledge and value as a candidate is to offer practical solutions during your interview. This could be in response to a specific problem posed by the interviewer or as part of a broader discussion about industry challenges. By presenting well-informed and feasible ideas, you not only highlight your understanding of the industry but also showcase your initiative and creativity. This proactive approach can significantly enhance your reputation as a knowledgeable and forward-thinking professional.
When discussing industry challenges, it's important to frame your solutions in a way that demonstrates both your deep understanding of the issues at hand and your ability to think strategically. For instance, if the company is facing a challenge with adapting to emerging technologies, you might suggest a phased implementation plan that includes staff training, pilot programs, and feedback loops to ensure a smooth transition. This shows that you are not only aware of the technological trends but also understand the practical steps needed to integrate them effectively.
Additionally, offering solutions based on data can be particularly impactful. If the company is struggling with predicting market trends, you could propose leveraging advanced analytics tools to gather and analyze consumer data. By demonstrating how data-driven insights can help anticipate shifts in consumer behavior or identify emerging market opportunities, you position yourself as someone who can help the company stay ahead of the curve in the digital marketing world.
It's also beneficial to consider regulatory challenges the industry might be facing. If there are upcoming changes in regulations that could impact the company, offering a compliance strategy that includes regular audits, employee training, and a robust reporting system can be highly valuable. This not only shows your awareness of the regulatory environment but also your ability to develop comprehensive strategies to navigate it.
When presenting your solutions, ensure they are specific and actionable. For example, rather than suggesting "improving customer engagement," you could propose implementing a customer relationship management (CRM) system that tracks interactions and automates follow-ups. This level of detail demonstrates that you have thought through the practicalities and feasibility of your ideas.
In conclusion, offering solutions during your interview is a powerful way to demonstrate your industry expertise and strategic thinking. By presenting informed and feasible ideas, you showcase your initiative and creativity, positioning yourself as a valuable asset who can help the company navigate its challenges and seize opportunities in the digital world. This proactive approach not only highlights your understanding of the industry but also cements your reputation as a forward-thinking and knowledgeable candidate.
Unlock Data Governance: Revolutionary Table-Level Access in Modern Platforms
Dive into our latest blog on mastering data governance with Microsoft Fabric & Databricks. Discover key strategies for robust table-level access control and secure your enterprise's data. A must-read for IT pros! #DataGovernance #Security
Q&A: Claire Walsh on how J-PAL’s King Climate Action Initiative tackles the twin climate and poverty crises
The King Climate Action Initiative (K-CAI) is the flagship climate change program of the Abdul Latif Jameel Poverty Action Lab (J-PAL), which innovates, tests, and scales solutions at the nexus of climate change and poverty alleviation, together with policy partners worldwide.
Claire Walsh is the associate director of policy at J-PAL Global at MIT. She is also the project director of K-CAI. Here, Walsh talks about the work of K-CAI since its launch in 2020, and describes the ways its projects are making a difference. This is part of an ongoing series exploring how the MIT School of Humanities, Arts, and Social Sciences is addressing the climate crisis.
Q: According to the King Climate Action Initiative (K-CAI), any attempt to address poverty effectively must also simultaneously address climate change. Why is that?
A: Climate change will disproportionately harm people in poverty, particularly in low- and middle-income countries, because they tend to live in places that are more exposed to climate risk. These are nations in sub-Saharan Africa and South and Southeast Asia where low-income communities rely heavily on agriculture for their livelihoods, so extreme weather — heat, droughts, and flooding — can be devastating for people’s jobs and food security. In fact, the World Bank estimates that up to 130 million more people may be pushed into poverty by climate change by 2030.
This is unjust because these countries have historically emitted the least; their people didn’t cause the climate crisis. At the same time, they are trying to improve their economies and improve people’s welfare, so their energy demands are increasing, and they are emitting more. But they don’t have the same resources as wealthy nations for mitigation or adaptation, and many developing countries understandably don’t feel eager to put solving a problem they didn’t create at the top of their priority list. This makes finding paths forward to cutting emissions on a global scale politically challenging.
For these reasons, the problems of enhancing the well-being of people experiencing poverty, addressing inequality, and reducing pollution and greenhouse gases are inextricably linked.
Q: So how does K-CAI tackle this hybrid challenge?
A: Our initiative is pretty unique. We are a competitive, policy-based research and development fund that focuses on innovating, testing, and scaling solutions. We support researchers from MIT and other universities, and their collaborators, who are actually implementing programs, whether NGOs [nongovernmental organizations], government, or the private sector. We fund pilots of small-scale ideas in a real-world setting to determine if they hold promise, followed by larger randomized, controlled trials of promising solutions in climate change mitigation, adaptation, pollution reduction, and energy access. Our goal is to determine, through rigorous research, if these solutions are actually working — for example, in cutting emissions or protecting forests or helping vulnerable communities adapt to climate change. And finally, we offer path-to-scale grants which enable governments and NGOs to expand access to programs that have been tested and have strong evidence of impact.
We think this model is really powerful. Since we launched in 2020, we have built a portfolio of over 30 randomized evaluations and 13 scaling projects in more than 35 countries. And to date, these projects have informed the scale-ups of evidence-based climate policies that have reached over 15 million people.
Q: It seems like K-CAI is advancing a kind of policy science, demanding proof of a program’s capacity to deliver results at each stage.
A: This is one of the factors that drew me to J-PAL back in 2012. I majored in anthropology and studied abroad in Uganda. From those experiences I became very passionate about pursuing a career focused on poverty reduction. To me, it is unfair that in a world full of so much wealth and so much opportunity there exists so much extreme poverty. I wanted to dedicate my career to that, but I’m also a very detail-oriented nerd who really cares about whether a program that claims to be doing something for people is accomplishing what it claims.
It’s been really rewarding to see demand from governments and NGOs for evidence-informed policymaking grow over my 12 years at J-PAL. This policy science approach holds exciting promise to help transform public policy and climate policy in the coming decades.
Q: Can you point to K-CAI-funded projects that meet this high bar and are now making a significant impact?
A: Several examples jump to mind. In the state of Gujarat, India, pollution regulators are trying to cut particulate matter air pollution, which is devastating to human health. The region is home to many major industries whose emissions negatively affect most of the state’s 70 million residents.
We partnered with state pollution regulators — kind of a regional EPA [Environmental Protection Agency] — to test an emissions trading scheme that is used widely in the U.S. and Europe but not in low- and middle-income countries. The government monitors pollution levels using technology installed at factories that sends data in real time, so the regulator knows exactly what their emissions look like. The regulator sets a cap on the overall level of pollution, allocates permits to pollute, and industries can trade emissions permits.
In 2019, researchers in the J-PAL network conducted the world’s first randomized, controlled trial of this emissions trading scheme and found that it cut pollution by 20 to 30 percent — a surprising reduction. It also reduced firms’ costs, on average, because the costs of compliance went down. The state government was eager to scale up the pilot, and in the past two years, two other cities, including Ahmedabad, the biggest city in the state, have adopted the concept.
We are also supporting a project in Niger, whose economy is hugely dependent on rain-fed agriculture but with climate change is experiencing rapid desertification. Researchers in the J-PAL network have been testing training farmers in a simple, inexpensive rainwater harvesting technique, where farmers dig a half-moon-shaped hole called a demi-lune right before the rainy season. This demi-lune feeds crops that are grown directly on top of it, and helps return land that resembled flat desert to arable production.
Researchers found that training farmers in this simple technology increased adoption from 4 percent to 94 percent and that demi-lunes increased agricultural output and revenue for farmers from the first year. K-CAI is funding a path-to-scale grant so local implementers can teach this technique to over 8,000 farmers and build a more cost-effective program model. If this takes hold, the team will work with local partners to scale the training to other relevant regions of the country and potentially other countries in the Sahel.
One final example that we are really proud of, because we first funded it as a pilot and now it’s in the path to scale phase: We supported a team of researchers working with partners in Bangladesh trying to reduce carbon emissions and other pollution from brick manufacturing, an industry that generates 17 percent of the country’s carbon emissions. The scale of manufacturing is so great that at some times of year, Dhaka (the capital of Bangladesh) looks like Mordor.
Workers form these bricks and stack hundreds of thousands of them, which they then fire by burning coal. A team of local researchers and collaborators from our J-PAL network found that you can reduce the amount of coal needed for the kilns by making some low-cost changes to the manufacturing process, including stacking the bricks in a way that increases airflow in the kiln and feeding the coal fires more frequently in smaller rather than larger batches.
In the randomized, controlled trial K-CAI supported, researchers found that this cut carbon and pollution emissions significantly, and now the government has invited the team to train 1,000 brick manufacturers in Dhaka in these techniques.
Q: These are all fascinating and powerful instances of implementing ideas that address a range of problems in different parts of the world. But can K-CAI go big enough and fast enough to take a real bite out of the twin poverty and climate crisis?
A: We’re not trying to find silver bullets. We are trying to build a large playbook of real solutions that work to solve specific problems in specific contexts. As you build those up in the hundreds, you have a deep bench of effective approaches to solve problems that can add up in a meaningful way. And because J-PAL works with governments and NGOs that have the capacity to take the research into action, since 2003, over 600 million people around the world have been reached by policies and programs that are informed by evidence that J-PAL-affiliated researchers produced. While global challenges seem daunting, J-PAL has shown that in 20 years we can achieve a great deal, and there is huge potential for future impact.
But unfortunately, globally, there is an underinvestment in policy innovation to combat climate change that may generate quicker, lower-cost returns at a large scale — especially in policies that determine which technologies get adopted or commercialized. For example, a lot of the huge fall in prices of renewable energy was enabled by early European government investments in solar and wind, and then continuing support for innovation in renewable energy.
That’s why I think social sciences have so much to offer in the fight against climate change and poverty; we are working where technology meets policy and where technology meets real people, which often determines their success or failure. The world should be investing in policy, economic, and social innovation just as much as it is investing in technological innovation.
Q: Do you need to be an optimist in your job?
A: I am half-optimist, half-pragmatist. I have no control over the climate change outcome for the world. And regardless of whether we can successfully avoid most of the potential damages of climate change, when I look back, I’m going to ask myself, “Did I fight or not?” The only choice I have is whether or not I fought, and I want to be a fighter.
I work in teams with people at work and we have a few leads in the team (consulting, project management, analysis and reporting, survey and reporting platforms) and nothing fucks me off more than when I'm trying to delegate a problem to the person responsible for solving it and in response they try to give me advice for how I can solve it.... no buddy this is your problem to solve. I've already done part of your job by defining the problem you've got to solve. you can use your own brain to make decisions I promise it's not that scary.