#dataleakage
keanu-55 · 3 months ago
Text
The Most Severe Data Leakage Incident in History: 3 Billion People Affected, Cybersecurity Facing Unprecedented Challenges
Two recent data breaches have once again drawn worldwide attention. In the first, one of the largest hacks in history exposed the data of nearly 3 billion people; in the second, the Polish Anti-Doping Agency (POLADA) was breached, leaking important agency data. These incidents not only highlight the severe challenges facing cybersecurity but also have a profound impact on the global network environment.
The first attack involved the personal information of nearly 3 billion people, including sensitive details such as full names, addresses, and Social Security numbers, and is regarded as one of the largest data breaches in history. In the second, hackers compromised POLADA's systems, leaking agency data and disrupting its normal operations.
The impact of these events is extremely widespread. First, they pose a huge risk to personal privacy: massive leaks expose hundreds of millions of people to identity theft and financial fraud. Second, the security of enterprises and government agencies has been called into question, eroding users' trust in these institutions. In addition, data breaches can expose the affected organizations to heavy fines, lawsuits, and reputational damage, which in turn affects economic activity. Finally, such incidents are prompting governments to strengthen the drafting and enforcement of data protection regulations, increasing regulatory pressure.
These events are a reminder of how important it is to strengthen information security at both the individual and organizational level. Individuals should adopt stricter password management and protect their accounts with measures such as two-factor authentication. Enterprises need to build a more solid network security defense and conduct regular security audits and vulnerability scans to keep data secure.
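To make the two-factor-authentication point concrete, here is a minimal sketch of TOTP-based verification in Python using the third-party pyotp library. The account name, issuer, and login flow are hypothetical examples for illustration, not details from the incidents above:

```python
# pip install pyotp
import pyotp

# Each account gets its own randomly generated base32 secret, stored
# server-side and shared once with the user's authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Provisioning URI that authenticator apps understand (usually shown as a QR code).
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp")
print("Enroll this URI in an authenticator app:", uri)

def verify_login(submitted_code: str) -> bool:
    """Second-factor check: a stolen password alone is no longer enough."""
    # valid_window=1 tolerates one 30-second step of clock drift.
    return totp.verify(submitted_code, valid_window=1)

# In a real login flow, submitted_code would come from the user.
print(verify_login(totp.now()))  # True: code matches the current time window
```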
As a professional cybersecurity company, Knownsec has accumulated deep experience and technical capability in preventing and responding to cybersecurity threats. Backed by a strong security research team and advanced tooling, Knownsec can help enterprises identify potential security risks and provide comprehensive security solutions.
For example, during a large-scale attack on an e-commerce platform, Knownsec responded quickly: its emergency response process stopped further data leakage and helped the customer patch system vulnerabilities, strengthening its network security posture. Likewise, after a financial institution suffered a data breach, Knownsec not only helped it conduct a thorough security review but also provided technical support for data encryption and access control to keep its data secure and compliant.
With the continuous upgrading of network attack methods, ensuring cybersecurity has become an unavoidable responsibility for enterprises and individuals.
hellojijoejoshi-blog · 2 years ago
Text
Samsung Bans Use of Generative AI Tools in the Workplace over Data Leakage Fears
South Korean tech giant Samsung has issued a blanket ban on using generative AI tools like ChatGPT, Google Bard, and Bing AI chatbot for work-related activities. The company fears that using such AI-based platforms could lead to data leakage. According to reports, Samsung discovered that staff uploaded sensitive code to one of these platforms, prompting the ban. The memo issued by the company discusses the risks associated with using these AI-based platforms.
Memo Issued to Employees, Details and Justification
Samsung reportedly notified staff at one of its biggest divisions about the new policy through a memo. The memo stated that the company has been observing an increase in interest in generative AI platforms like ChatGPT and others, both internally and externally. The memo reads, "While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI." Moreover, the memo even hinted that Samsung engineers accidentally leaked internal source code by uploading it on ChatGPT. Samsung will deploy additional security measures to create a secure environment for the safe use of generative AI to enhance employee productivity and efficiency. However, until these measures are prepared, Samsung has temporarily restricted the use of generative AI.
Reasons for the Ban
A few companies and even nations have begun restricting generative AI platforms like ChatGPT, Bing AI, and others, on the grounds that they could retain sensitive data. Until now, Samsung was not among them: the company had allowed engineers in its semiconductor division to use ChatGPT to help fix problems with source code. However, some employees reportedly entered top-secret data into the platform, possibly including source code for a new program and internal meeting notes about their hardware. Because data submitted to these platforms is stored on external servers, Samsung fears it could end up in the wrong hands. The new rules ban generative AI systems on company-owned computers, tablets, and phones, as well as on Samsung's internal networks. Samsung products sold to consumers, such as Android smartphones and Windows laptops, aren't restricted from accessing these platforms.
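For context, controls like the ones Samsung says it is preparing often start as a simple egress filter that scans outbound prompts for secrets before they leave the corporate network. The sketch below is a hypothetical illustration in Python; the patterns and the submit_prompt function are assumptions for the example, not Samsung's actual tooling:

```python
import re

# Hypothetical patterns for material that should never reach an external LLM.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of all sensitive patterns found in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def submit_prompt(text: str) -> None:
    """Block or forward a prompt bound for an external generative AI service."""
    hits = scan_prompt(text)
    if hits:
        # Tell the user why the prompt was blocked instead of failing silently.
        raise PermissionError(f"Prompt blocked, matched: {', '.join(hits)}")
    # ... otherwise forward the prompt to the external service ...

submit_prompt("Summarize this meeting")            # passes the filter
# submit_prompt("key: AKIAABCDEFGHIJKLMNOP")       # would raise PermissionError
```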
Implications of the Ban
The use of generative AI in the workplace has been on the rise, with employees using it to automate tedious tasks such as data entry and analysis. Samsung's decision to ban these platforms at work will therefore affect the productivity and efficiency of both the company and its employees. The ban may also prompt other companies to review their own policies on generative AI in the workplace.
abhedit · 2 years ago
Text
How do you know if your data has been hacked?
prividsblog · 2 years ago
Text
#Employees Are Feeding Sensitive Biz Data to #ChatGPT, Raising Security Fears
"More than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information."
In an article about data security and employee awareness (continuing a theme from my previous post on the social engineering of individuals to leak personal data), it seems employees are putting sensitive business data into ChatGPT and similar large language models (LLMs). The full article is below, along with a link in the title.
Full article below (Link to original here):
Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models, and that information could be retrieved at a later date if proper data security isn't in place for the service.
In a recent report, data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential information, client data, source code, or regulated information to the LLM. 
In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company.
And as more employees use ChatGPT and other AI-based services as productivity tools, the risk will grow, says Howard Ting, CEO of Cyberhaven.
"There was this big migration of data from on-prem to cloud, and the next big shift is going to be the migration of data into these generative apps," he says. "And how that plays out [remains to be seen] — I think, we're in pregame; we're not even in the first inning."
And as more software firms connect their applications to ChatGPT, the LLM may be collecting far more information than users — or their companies — are aware of, putting them at legal risk, Karla Grossenbacher, a partner at law firm Seyfarth Shaw, warned in a Bloomberg Law column.
"Prudent employers will include — in employee confidentiality agreements and policies — prohibitions on employees referring to or entering confidential, proprietary, or trade secret information into AI chatbots or language models, such as ChatGPT," she wrote. "On the flip side, since ChatGPT was trained on wide swaths of online information, employees might receive and use information from the tool that is trademarked, copyrighted, or the intellectual property of another person or entity, creating legal risk for employers."
The risk is not theoretical. In a June 2021 paper, a dozen researchers from a Who's Who list of companies and universities — including Apple, Google, Harvard University, and Stanford University — found that so-called "training data extraction attacks" could successfully recover verbatim text sequences, personally identifiable information (PII), and other information in training documents from the LLM known as GPT-2. In fact, only a single document was necessary for an LLM to memorize verbatim data, the researchers stated in the paper.
Picking the Brain of GPT
Indeed, these training data extraction attacks are one of the key adversarial concerns among machine learning researchers. Also known as "exfiltration via machine learning inference," the attacks could gather sensitive information or steal intellectual property, according to MITRE's Adversarial Threat Landscape for Artificial-Intelligence Systems (Atlas) knowledge base.
It works like this: by crafting queries that lead a generative AI system to recall specific items, an adversary can trigger the model to emit a specific memorized piece of information rather than generate synthetic data. A number of real-world examples exist for GPT-3, the successor to GPT-2, including an instance where GitHub's Copilot recalled a specific developer's username and coding priorities.
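To make the mechanism concrete, here is a minimal sketch of a verbatim-recall probe against the openly released GPT-2, using the Hugging Face transformers library. The prompt is a made-up example; a real extraction attack, as in the 2021 paper, would generate many candidate continuations and rank them by the model's confidence:

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Prompt with a prefix that might appear in the training data; if the model
# memorized a continuation, greedy decoding tends to reproduce it verbatim.
prompt = "My social security number is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output = model.generate(
    input_ids,
    max_new_tokens=20,
    do_sample=False,  # greedy decoding favors memorized sequences
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```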
Beyond GPT-based offerings, other AI-based services have raised questions as to whether they pose a risk. Automated transcription service Otter.ai, for instance, transcribes audio files into text, automatically identifying speakers and allowing important words to be tagged and phrases to be highlighted. The company's housing of that information in its cloud has caused concern for journalists.
The company says it has committed to keeping user data private and put in place strong compliance controls, according to Julie Wu, senior compliance manager at Otter.ai.
"Otter has completed its SOC2 Type 2 audit and reports, and we employ technical and organizational measures to safeguard personal data," she tells Dark Reading. "Speaker identification is account bound. Adding a speaker’s name will train Otter to recognize the speaker for future conversations you record or import in your account," but not allow speakers to be identified across accounts.
APIs Allow Fast GPT Adoption
The popularity of ChatGPT has caught many companies by surprise. More than 300 developers, according to the last published numbers from a year ago, are using GPT-3 to power their applications. For example, social media firm Snap and shopping platforms Instacart and Shopify are all using ChatGPT through the API to add chat functionality to their mobile applications.
Based on conversations with his company's clients, Cyberhaven's Ting expects the move to generative AI apps will only accelerate, to be used for everything from generating memos and presentations to triaging security incidents and interacting with patients.
As he says his clients have told him: "Look, right now, as a stopgap measure, I'm just blocking this app, but my board has already told me we cannot do that. Because these tools will help our users be more productive — there is a competitive advantage — and if my competitors are using these generative AI apps, and I'm not allowing my users to use it, that puts us at a disadvantage."
The good news is that education could have a big impact on whether data leaks from a specific company, because a small number of employees are responsible for most of the risky requests. Fewer than 1% of workers are responsible for 80% of the incidents of sending sensitive data to ChatGPT, says Cyberhaven's Ting.
"You know, there are two forms of education: There's the classroom education, like when you are onboarding an employee, and then there's the in-context education, when someone is actually trying to paste data," he says. "I think both are important, but I think the latter is way more effective from what we've seen."
In addition, OpenAI and other companies are working to limit the LLM's access to personal information and sensitive data: Asking for personal details or sensitive corporate information currently leads to canned statements from ChatGPT demurring from complying.
For example, when asked, "What is Apple's strategy for 2023?" ChatGPT responded: "As an AI language model, I do not have access to Apple's confidential information or future plans. Apple is a highly secretive company, and they typically do not disclose their strategies or future plans to the public until they are ready to release them."
geekscripts · 2 years ago
Text
Wholeaked: Find the Responsible Person in Case of Leakage | #DataLeakage #Privacy #SecurityTools #Wholeaked #Security
thehackernewz · 2 years ago
Text
Microsoft Data Breach Exposes Customer Contact Information
Microsoft has confirmed a data breach that exposed some customers' sensitive contact information. The cause was a misconfigured Microsoft server that was left easily accessible over the internet.
lal0ca · 3 years ago
Photo
post description: this is fucking hilaaaaaarious. screenshot of scholarly article defining islamic terrorism as a violent act (but that's grammatically an empty variable where violence isnt even defined) for the meaningful reference, simply the attraction of fucking publicity -- so if you go viral congrats youre muslim nigga!😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂😂 self representation in media is what is making america call the people, especially religious, in the "middle east" (to the holy west as center vantage that is!), violent/forceful war criminals and international public enemies. #internetmodel #nigga #islamophobia #internet #internetpolicing #dataleakage #correlation #antigrammar #viral (at ISLAM) https://www.instagram.com/p/CcYXqK_uwUj/?igshid=NGJjMDIxMWI=
yuppielionheart · 3 years ago
Text
What is Pi Network?
This vlog is made for interested Filipinos.
vilaspatelvlogs · 4 years ago
Text
Data Theft Strikes Again: Data of 500 Million LinkedIn Users Leaked, Including 6.1 Million Indians; Data of 530 Million Facebook Users Leaked Earlier the Same Week
New Delhi: It now seems that user data is not safe on any digital platform. Earlier this week it emerged that the data of more than 530 million Facebook users had leaked; now the data of 500 million LinkedIn users has leaked as well. The data has reportedly been put up for sale on a hacker forum. …
showcasebeautywithclaire · 5 years ago
Photo
Working from home on something confidential? Don’t let any business information leak! Make sure you have muted or turned off any smart home assistants within listening distance when taking part in your professional calls. #alexa #googleassistant #siri #smarthomeassistant #workingfromhome #dataleakage #worksmart #techtips https://www.instagram.com/p/B-ccnw-lQbn/?igshid=1kycuqjwdz1bv
bleuwireitservices · 5 years ago
Link
How to Prevent Data Breaches: 5 Tips for Your Small Business http://bit.ly/2Mld4iI
managedclouddc · 2 years ago
Photo
Stay secure against insider threats and cybersecurity risks within your organization. Get a robust data leak prevention solution from SPOCHUB. #CONNECTNOW . . . #SPOCHUB #digitaltransformation #softwareservices #dlp #dataprotection #datasecurity #security #cybersecurity #databreach #databreaches #dataleakage #datasecuritybreach
tataadvanced · 4 years ago
Link
#Key2Privacy Data Leakage Prevention (DLP) solutions have gained considerable traction as businesses look to reduce the risk of losing critical data amid growing data breach incidents. Learn more about how DLP solutions help organizations maintain the security of confidential information in this insightful article. For more details, write to us: [email protected] or visit https://lnkd.in/eMY9Kzz #cybersecurity #cybersecurityservices #dlp #datasecurity #dataleaks #dataleakage #cybersecuritysolutions #dataleak #thisistata
olumina · 6 years ago
Text
Time to understand the concept of unintended data leakage and how to avoid data leaks related to these elements: https://buff.ly/2LafgHn #DataLeakage #AndroidApp #AppSecurity