#AI management
Explore tagged Tumblr posts
diptisinghblog · 2 hours ago
Text
Unlocking Career Opportunities with PGDM Specialization in AI & ML
Tumblr media
1 note · View note
jcmarchi · 1 month ago
Text
Understanding Shadow AI and Its Impact on Your Business
New Post has been published on https://thedigitalinsider.com/understanding-shadow-ai-and-its-impact-on-your-business/
Tumblr media Tumblr media
The market is booming with innovation and new AI projects. It’s no surprise that businesses are rushing to use AI to stay ahead in the current fast-paced economy. However, this rapid AI adoption also presents a hidden challenge: the emergence of ‘Shadow AI.’
Here’s what AI is doing in day-to-day life:
Saving time by automating repetitive tasks.
Generating insights that were once time-consuming to uncover.
Improving decision-making with predictive models and data analysis.
Creating content through AI tools for marketing and customer service.
All these benefits make it clear why businesses are eager to adopt AI. But what happens when AI starts operating in the shadows?
This hidden phenomenon is known as Shadow AI.
What Do We Understand By Shadow AI?
Shadow AI refers to using AI technologies and platforms that haven’t been approved or vetted by the organization’s IT or security teams.
While it may seem harmless or even helpful at first, this unregulated use of AI can expose various risks and threats.
Over 60% of employees admit using unauthorized AI tools for work-related tasks. That’s a significant percentage when considering potential vulnerabilities lurking in the shadows.
Shadow AI vs. Shadow IT
The terms Shadow AI and Shadow IT might sound like similar concepts, but they are distinct.
Shadow IT involves employees using unapproved hardware, software, or services. On the other hand, Shadow AI focuses on the unauthorized use of AI tools to automate, analyze, or enhance work. It might seem like a shortcut to faster, smarter results, but it can quickly spiral into problems without proper oversight.
Risks Associated with Shadow AI
Let’s examine the risks of shadow AI and discuss why it’s critical to maintain control over your organization’s AI tools.
Data Privacy Violations
Using unapproved AI tools can risk data privacy. Employees may accidentally share sensitive information while working with unvetted applications.
One in five companies in the UK has faced data leakage due to employees using generative AI tools. The absence of proper encryption and oversight increases the chances of data breaches, leaving organizations open to cyberattacks.
Regulatory Noncompliance
Shadow AI brings serious compliance risks. Organizations must follow regulations like GDPR, HIPAA, and the EU AI Act to ensure data protection and ethical AI use.
Noncompliance can result in hefty fines. For example, GDPR violations can cost companies up to €20 million or 4% of their global annual revenue, whichever is higher.
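That "whichever is higher" rule matters more than it might seem: for large companies, the revenue-based cap dwarfs the fixed one. A minimal sketch of the calculation (illustrative only, not legal advice):

```python
def max_gdpr_fine(global_revenue_eur: float) -> float:
    """Upper bound on a GDPR fine: the greater of EUR 20M or 4% of revenue."""
    return max(20_000_000, 0.04 * global_revenue_eur)

# A company with EUR 1 billion in global annual revenue faces a cap of
# EUR 40 million, double the fixed EUR 20 million floor.
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
print(max_gdpr_fine(100_000_000))    # 20000000.0
```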
Operational Risks
Shadow AI can create misalignment between the outputs generated by these tools and the organization’s goals. Over-reliance on unverified models can lead to decisions based on unclear or biased information. This misalignment can impact strategic initiatives and reduce overall operational efficiency.
In fact, a survey indicated that nearly half of senior leaders worry about the impact of AI-generated misinformation on their organizations.
Reputational Damage
The use of shadow AI can harm an organization’s reputation. Inconsistent results from these tools can spoil trust among clients and stakeholders. Ethical breaches, such as biased decision-making or data misuse, can further damage public perception.
A clear example is the backlash against Sports Illustrated when it was found they used AI-generated content with fake authors and profiles. This incident showed the risks of poorly managed AI use and sparked debates about its ethical impact on content creation. It highlights how a lack of regulation and transparency in AI can damage trust.
Why Shadow AI is Becoming More Common
Let’s go over the factors behind the widespread use of shadow AI in organizations today.
Lack of Awareness: Many employees do not know the company’s policies regarding AI usage. They may also be unaware of the risks associated with unauthorized tools.
Limited Organizational Resources: Some organizations do not provide approved AI solutions that meet employee needs. When approved solutions fall short or are unavailable, employees often seek external options to meet their requirements. This lack of adequate resources creates a gap between what the organization provides and what teams need to work efficiently.
Misaligned Incentives: Organizations sometimes prioritize immediate results over long-term goals. Employees may bypass formal processes to achieve quick outcomes.
Use of Free Tools: Employees may discover free AI applications online and use them without informing IT departments. This can lead to unregulated use of sensitive data.
Upgrading Existing Tools: Teams might enable AI features in approved software without permission. This can create security gaps if those features require a security review.
Manifestations of Shadow AI
Shadow AI appears in multiple forms within organizations. Some of these include:
AI-Powered Chatbots
Customer service teams sometimes use unapproved chatbots to handle queries. For example, an agent might rely on a chatbot to draft responses rather than referring to company-approved guidelines. This can lead to inaccurate messaging and the exposure of sensitive customer information.
Machine Learning Models for Data Analysis
Employees may upload proprietary data to free or external machine-learning platforms to discover insights or trends. A data analyst might use an external tool to analyze customer purchasing patterns but unknowingly put confidential data at risk.
Marketing Automation Tools
Marketing departments often adopt unauthorized tools to streamline tasks such as email campaigns or engagement tracking. These tools can improve productivity but may also mishandle customer data, violating compliance rules and damaging customer trust.
Data Visualization Tools
AI-based tools are sometimes used to create quick dashboards or analytics without IT approval. While they offer efficiency, these tools can generate inaccurate insights or compromise sensitive business data when used carelessly.
Shadow AI in Generative AI Applications
Teams frequently use tools like ChatGPT or DALL-E to create marketing materials or visual content. Without oversight, these tools may produce off-brand messaging or raise intellectual property concerns, posing potential risks to organizational reputation.
Managing the Risks of Shadow AI
Managing the risks of shadow AI requires a focused strategy emphasizing visibility, risk management, and informed decision-making.
Establish Clear Policies and Guidelines
Organizations should define clear policies for AI use within the organization. These policies should outline acceptable practices, data handling protocols, privacy measures, and compliance requirements.
Employees must also learn the risks of unauthorized AI usage and the importance of using approved tools and platforms.
Classify Data and Use Cases
Businesses must classify data based on its sensitivity and significance. Critical information, such as trade secrets and personally identifiable information (PII), must receive the highest level of protection.
Organizations should ensure that public or unverified cloud AI services never handle sensitive data. Instead, companies should rely on enterprise-grade AI solutions to provide strong data security.
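The classification rule above can be made concrete as a simple routing policy: every AI service is approved for certain sensitivity tiers, and anything else is denied by default. This is a minimal sketch; the service names and tiers are illustrative assumptions, not a real product's API.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1      # marketing copy, published material
    INTERNAL = 2    # routine business data
    RESTRICTED = 3  # trade secrets, PII

# Each service is approved only for the tiers listed here (deny by default).
APPROVED_FOR = {
    "public-cloud-ai": {Sensitivity.PUBLIC},
    "enterprise-ai": {Sensitivity.PUBLIC, Sensitivity.INTERNAL,
                      Sensitivity.RESTRICTED},
}

def allowed(service: str, sensitivity: Sensitivity) -> bool:
    """True only if the service is explicitly approved for this data tier."""
    return sensitivity in APPROVED_FOR.get(service, set())

print(allowed("public-cloud-ai", Sensitivity.RESTRICTED))  # False
print(allowed("enterprise-ai", Sensitivity.RESTRICTED))    # True
```

The key design choice is the deny-by-default lookup: an unknown or unvetted tool gets no access at all, which is exactly the posture that prevents shadow AI from handling sensitive data.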
Acknowledge Benefits and Offer Guidance
It is also important to acknowledge the benefits of shadow AI, which often arises from a desire for increased efficiency.
Instead of banning its use, organizations should guide employees in adopting AI tools within a controlled framework. They should also provide approved alternatives that meet productivity needs while ensuring security and compliance.
Educate and Train Employees
Organizations must prioritize employee education to ensure the safe and effective use of approved AI tools. Training programs should focus on practical guidance so that employees understand the risks and benefits of AI while following proper protocols.
Educated employees are more likely to use AI responsibly, minimizing potential security and compliance risks.
Monitor and Control AI Usage
Tracking and controlling AI usage is equally important. Businesses should implement monitoring tools to keep an eye on AI applications across the organization. Regular audits can help them identify unauthorized tools or security gaps.
Organizations should also take proactive measures like network traffic analysis to detect and address misuse before it escalates.
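One practical form of that network traffic analysis is scanning proxy logs for outbound requests to known generative-AI endpoints that aren't on the approved list. The sketch below assumes a simple log-entry shape and a hand-maintained domain list; real deployments would pull these from a threat-intel feed or CASB.

```python
# Known generative-AI API hosts (illustrative; maintain from a real feed).
AI_DOMAINS = {"api.openai.com", "generativelanguage.googleapis.com"}
# Hosts covered by an enterprise agreement and security review.
APPROVED = {"api.openai.com"}

def flag_shadow_ai(proxy_log: list[dict]) -> list[dict]:
    """Return log entries that hit AI endpoints without approval."""
    return [entry for entry in proxy_log
            if entry["host"] in AI_DOMAINS and entry["host"] not in APPROVED]

log = [
    {"user": "alice", "host": "api.openai.com"},
    {"user": "bob", "host": "generativelanguage.googleapis.com"},
]
for hit in flag_shadow_ai(log):
    print(f"unapproved AI usage: {hit['user']} -> {hit['host']}")
```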
Collaborate with IT and Business Units
Collaboration between IT and business teams is vital for selecting AI tools that align with organizational standards. Business units should have a say in tool selection to ensure practicality, while IT ensures compliance and security.
This teamwork fosters innovation without compromising the organization’s safety or operational goals.
Steps Forward in Ethical AI Management
As AI dependency grows, managing shadow AI with clarity and control could be the key to staying competitive. The future of AI will rely on strategies that align organizational goals with ethical and transparent technology use.
To learn more about how to manage AI ethically, stay tuned to Unite.ai for the latest insights and tips.
0 notes
digitaljainy · 8 months ago
Text
AI in Digital Marketing: Revolutionizing the Industry
Tumblr media
AI integration in digital marketing is revolutionizing the field in today's fast-paced digital environment. Companies are using AI-powered tools and technology to improve client engagement, expedite marketing management, and produce better outcomes. This blog examines how artificial intelligence is transforming digital marketing and how this will affect the direction of digital marketing services going forward.
To read more about how AI integration helps in digital marketing, visit our blog.
0 notes
mostlysignssomeportents · 2 months ago
Text
Bossware is unfair (in the legal sense, too)
Tumblr media
You can get into a lot of trouble by assuming that rich people know what they're doing. For example, you might assume that ad-tech works – bypassing people's critical faculties, reaching inside their minds and brainwashing them with Big Data insights, because if that's not what's happening, then why would rich people pour billions into those ads?
https://pluralistic.net/2020/12/06/surveillance-tulip-bulbs/#adtech-bubble
You might assume that private equity looters make their investors rich, because otherwise, why would rich people hand over trillions for them to play with?
https://thenextrecession.wordpress.com/2024/11/19/private-equity-vampire-capital/
The truth is, rich people are suckers like the rest of us. If anything, succeeding once or twice makes you an even bigger mark, with a sense of your own infallibility that inflates to fill the bubble your yes-men seal you inside of.
Rich people fall for scams just like you and me. Anyone can be a mark. I was:
https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security
But though rich people can fall for scams the same way you and I do, the way those scams play out is very different when the marks are wealthy. As Keynes had it, "The market can remain irrational longer than you can remain solvent." When the marks are rich (or worse, super-rich), they can be played for much longer before they go bust, creating the appearance of solidity.
Noted Keynesian John Kenneth Galbraith had his own thoughts on this. Galbraith coined the term "bezzle" to describe "the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it." In that magic interval, everyone feels better off: the mark thinks he's up, and the con artist knows he's up.
Rich marks have looong bezzles. Empirically incorrect ideas grounded in the most outrageous superstition and junk science can take over whole sections of your life, simply because a rich person – or rich people – are convinced that they're good for you.
Take "scientific management." In the early 20th century, the con artist Frederick Taylor convinced rich industrialists that he could increase their workers' productivity through a kind of caliper-and-stopwatch driven choreography:
https://pluralistic.net/2022/08/21/great-taylors-ghost/#solidarity-or-bust
Taylor and his army of labcoated sadists perched at the elbows of factory workers (whom Taylor referred to as "stupid," "mentally sluggish," and as "an ox") and scripted their motions to a fare-thee-well, transforming their work into a kind of kabuki of obedience. They weren't more efficient, but they looked smart, like obedient robots, and this made their bosses happy. The bosses shelled out fortunes for Taylor's services, even though the workers who followed his prescriptions were less efficient and generated fewer profits. Bosses were so dazzled by the spectacle of a factory floor of crisply moving people interfacing with crisply working machines that they failed to understand that they were losing money on the whole business.
To the extent they noticed that their revenues were declining after implementing Taylorism, they assumed that this was because they needed more scientific management. Taylor had a sweet con: the worse his advice performed, the more reasons there were to pay him for more advice.
Taylorism is a perfect con to run on the wealthy and powerful. It feeds into their prejudice and mistrust of their workers, and into their misplaced confidence in their own ability to understand their workers' jobs better than their workers do. There's always a long dollar to be made playing the "scientific management" con.
Today, there's an app for that. "Bossware" is a class of technology that monitors and disciplines workers, and it was supercharged by the pandemic and the rise of work-from-home. Combine bossware with work-from-home and your boss gets to control your life even when you're in your own place – "work from home" becomes "live at work":
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware
Gig workers are at the white-hot center of bossware. Gig work promises "be your own boss," but bossware puts a Taylorist caliper wielder into your phone, monitoring and disciplining you as you drive your own car around delivering parcels or picking up passengers.
In automation terms, a worker hitched to an app this way is a "reverse centaur." Automation theorists call a human augmented by a machine a "centaur" – a human head supported by a machine's tireless and strong body. A "reverse centaur" is a machine augmented by a human – like the Amazon delivery driver whose app goads them to make inhuman delivery quotas while punishing them for looking in the "wrong" direction or even singing along with the radio:
https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips
Bossware pre-dates the current AI bubble, but AI mania has supercharged it. AI pumpers insist that AI can do things it positively cannot do – rolling out an "autonomous robot" that turns out to be a guy in a robot suit, say – and rich people are groomed to buy the services of "AI-powered" bossware:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
For an AI scammer like Elon Musk or Sam Altman, the fact that an AI can't do your job is irrelevant. From a business perspective, the only thing that matters is whether a salesperson can convince your boss that an AI can do your job – whether or not that's true:
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
The fact that AI can't do your job, but that your boss can be convinced to fire you and replace you with the AI that can't do your job, is the central fact of the 21st century labor market. AI has created a world of "algorithmic management" where humans are demoted to reverse centaurs, monitored and bossed about by an app.
The techbro's overwhelming conceit is that nothing is a crime, so long as you do it with an app. Just as fintech is designed to be a bank that's exempt from banking regulations, the gig economy is meant to be a workplace that's exempt from labor law. But this wheeze is transparent, and easily pierced by enforcers, so long as those enforcers want to do their jobs. One such enforcer is Alvaro Bedoya, an FTC commissioner with a keen interest in antitrust's relationship to labor protection.
Bedoya understands that antitrust has a checkered history when it comes to labor. As he's written, the history of antitrust is a series of incidents in which Congress revised the law to make it clear that forming a union was not the same thing as forming a cartel, only to be ignored by boss-friendly judges:
https://pluralistic.net/2023/04/14/aiming-at-dollars/#not-men
Bedoya is no mere historian. He's an FTC Commissioner, one of the most powerful regulators in the world, and he's profoundly interested in using that power to help workers, especially gig workers, whose misery starts with systemic, wide-scale misclassification as contractors:
https://pluralistic.net/2024/02/02/upward-redistribution/
In a new speech to NYU's Wagner School of Public Service, Bedoya argues that the FTC's existing authority allows it to crack down on algorithmic management – that is, algorithmic management is illegal, even if you break the law with an app:
https://www.ftc.gov/system/files/ftc_gov/pdf/bedoya-remarks-unfairness-in-workplace-surveillance-and-automated-management.pdf
Bedoya starts with a delightful analogy to The Hawtch-Hawtch, a mythical town from a Dr Seuss poem. The Hawtch-Hawtch economy is based on beekeeping, and the Hawtchers develop an overwhelming obsession with their bee's laziness, and determine to wring more work (and more honey) out of him. So they appoint a "bee-watcher." But the bee doesn't produce any more honey, which leads the Hawtchers to suspect their bee-watcher might be sleeping on the job, so they hire a bee-watcher-watcher. When that doesn't work, they hire a bee-watcher-watcher-watcher, and so on and on.
For gig workers, it's bee-watchers all the way down. Call center workers are subjected to "AI" video monitoring, and "AI" voice monitoring that purports to measure their empathy. Another AI times their calls. Two more AIs analyze the "sentiment" of the calls and the success of workers in meeting arbitrary metrics. On average, a call-center worker is subjected to five forms of bossware, which stand at their shoulders, marking them down and brooking no debate.
For example, when an experienced call center operator fielded a call from a customer with a flooded house who wanted to know why no one from her boss's repair plan system had come out to address the flooding, the operator was punished by the AI for failing to try to sell the customer a repair plan. There was no way for the operator to protest that the customer had a repair plan already, and had called to complain about it.
Workers report being sickened by this kind of surveillance, literally – stressed to the point of nausea and insomnia. Ironically, one of the most pervasive sources of automation-driven sickness are the "AI wellness" apps that bosses are sold by AI hucksters:
https://pluralistic.net/2024/03/15/wellness-taylorism/#sick-of-spying
The FTC has broad authority to block "unfair trade practices," and Bedoya builds the case that this is an unfair trade practice. Proving an unfair trade practice is a three-part test: a practice is unfair if it causes "substantial injury," can't be "reasonably avoided," and isn't outweighed by a "countervailing benefit." In his speech, Bedoya makes the case that algorithmic management satisfies all three steps and is thus illegal.
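The three-part test reads almost like boolean logic, so purely as an illustration of the argument's structure (not legal analysis), it can be sketched as:

```python
def is_unfair_practice(substantial_injury: bool,
                       reasonably_avoidable: bool,
                       countervailing_benefit: bool) -> bool:
    """A practice is unfair iff it injures, can't be avoided,
    and isn't offset by a countervailing benefit."""
    return (substantial_injury
            and not reasonably_avoidable
            and not countervailing_benefit)

# Bedoya's claim about algorithmic management: injury yes, avoidable no,
# countervailing benefit no -> unfair, and thus illegal.
print(is_unfair_practice(True, False, False))  # True
```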
On the question of "substantial injury," Bedoya describes the workday of warehouse workers working for ecommerce sites. He describes one worker who is monitored by an AI that requires him to pick and drop an object off a moving belt every 10 seconds, for ten hours per day. The worker's performance is tracked by a leaderboard, and supervisors punish and scold workers who don't make quota, and the algorithm auto-fires if you fail to meet it.
Under those conditions, it was only a matter of time until the worker experienced injuries to two of his discs and was permanently disabled, with the company being found 100% responsible for this injury. OSHA found a "direct connection" between the algorithm and the injury. No wonder warehouses sport vending machines that sell painkillers rather than sodas. It's clear that algorithmic management leads to "substantial injury."
What about "reasonably avoidable?" Can workers avoid the harms of algorithmic management? Bedoya describes the experience of NYC rideshare drivers who attended a round-table with him. The drivers describe logging tens of thousands of successful rides for the apps they work for, on promise of "being their own boss." But then the apps start randomly suspending them, telling them they aren't eligible to book a ride for hours at a time, sending them across town to serve an underserved area and still suspending them. Drivers who stop for coffee or a pee are locked out of the apps for hours as punishment, and so drive 12-hour shifts without a single break, in hopes of pleasing the inscrutable, high-handed app.
All this, as drivers' pay is falling and their credit card debts are mounting. No one will explain to drivers how their pay is determined, though the legal scholar Veena Dubal's work on "algorithmic wage discrimination" reveals that rideshare apps temporarily increase the pay of drivers who refuse rides, only to lower it again once they're back behind the wheel:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
This is like the pit boss who gives a losing gambler some freebies to lure them back to the table, over and over, until they're broke. No wonder they call this a "casino mechanic." There's only two major rideshare apps, and they both use the same high-handed tactics. For Bedoya, this satisfies the second test for an "unfair practice" – it can't be reasonably avoided. If you drive rideshare, you're trapped by the harmful conduct.
The final prong of the "unfair practice" test is whether the conduct has "countervailing value" that makes up for this harm.
To address this, Bedoya goes back to the call center, where operators' performance is assessed by "Speech Emotion Recognition" algorithms, a pseudoscientific hoax that purports to be able to determine your emotions from your voice. These SERs don't work – for example, they might interpret a customer's laughter as anger. But they fail differently for different kinds of workers: workers with accents – from the American south, or the Philippines – attract more disapprobation from the AI. Half of all call center workers are monitored by SERs, and a quarter of workers have SERs scoring them "constantly."
Bossware AIs also produce transcripts of these workers' calls, but workers with accents find them "riddled with errors." These are consequential errors, since their bosses assess their performance based on the transcripts, and yet another AI produces automated work scores based on them.
In other words, algorithmic management is a procession of bee-watchers, bee-watcher-watchers, and bee-watcher-watcher-watchers, stretching to infinity. It's junk science. It's not producing better call center workers. It's producing arbitrary punishments, often against the best workers in the call center.
There is no "countervailing benefit" to offset the unavoidable substantial injury of life under algorithmic management. In other words, algorithmic management fails all three prongs of the "unfair practice" test, and it's illegal.
What should we do about it? Bedoya builds the case for the FTC acting on workers' behalf under its "unfair practice" authority, but he also points out that the lack of worker privacy is at the root of this hellscape of algorithmic management.
He's right. The last major update Congress made to US privacy law was in 1988, when they banned video-store clerks from telling the newspapers which VHS cassettes you rented. The US is long overdue for a new privacy regime, and workers under algorithmic management are part of a broad coalition that's closer than ever to making that happen:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
Workers should have the right to know which of their data is being collected, who it's being shared with, and how it's being used. We all should have that right. That's what the actors' strike was partly motivated by: actors who were being ordered to wear mocap suits to produce data that could be used to produce a digital double of them, "training their replacement," but the replacement was a deepfake.
With a Trump administration on the horizon, the future of the FTC is in doubt. But the coalition for a new privacy law includes many of Trumpland's most powerful blocs – like Jan 6 rioters whose location was swept up by Google and handed over to the FBI. A strong privacy law would protect their Fourth Amendment rights – but also the rights of BLM protesters who experienced this far more often, and with far worse consequences, than the insurrectionists.
The "we do it with an app, so it's not illegal" ruse is wearing thinner by the day. When you have a boss for an app, your real boss gets an accountability sink, a convenient scapegoat that can be blamed for your misery.
The fact that this makes you worse at your job, that it loses your boss money, is no guarantee that you will be spared. Rich people make great marks, and they can remain irrational longer than you can remain solvent. Markets won't solve this one – but worker power can.
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
2K notes · View notes
frogfacey · 4 months ago
Text
"this new generation is the dumbest and laziest ever because ai is ruining people's ability to learn!!! Why are you using ai to write emails and cover letters when you could instead LEARN this beautiful, important and valuable skill instead of growing lazy and getting ai slop to slop it for you?" oh my god. oooooh my god. oh my goooooood.
1K notes · View notes
giggso · 2 years ago
Text
Customer Support Automated ITOps - Giggso
Tumblr media
Enable our powerful omni-channel workflows to interact with your customers across multiple channels, backed by ITSM processes for responsible AI management. Get Started Today.
Read More: https://www.giggso.com/customer-support/
0 notes
Text
Ihnmaims art I made in a car ride except they get progressively worse as I got more tired 💀
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
2K notes · View notes
cipheramnesia · 2 months ago
Text
Tumblr media
Pulling this because I don't want to get into arguing about AI with a stranger, but I do think it's very funny to see Tim Burton upset about having his style imitated. "It's nothing like when I, a human, have an entirely derivative and unoriginal style - first of all, no robot could ever reproduce my human racism" oh no Tim, the machines are racist too, don't worry.
155 notes · View notes
mawrrbid · 7 months ago
Text
Tumblr media
Made a bunch of silly doodles with Hermes and Touchstaved LI because I didn't feel like doing anything more polished.
Absolutely loved how all of them turned out, might do the same with Kreide someday.
All separate sketches under the cut. <3
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
243 notes · View notes
zytes · 5 months ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
8.15.2024 — 6:47pm -> 8:17pm
142 notes · View notes
okaydiscount · 16 days ago
Text
Tumblr media
h. hgere. unfinished sketch of a really stupid Rockband 2 vr but the ai is self aware au that im never gonna touch again probably. als o SORRY DR COOMERS NOT THERE just pretend hes playing the drums in the background or smth
also why rockband 2 you might ask? 1. great game that i grew up with and 2. for some reason theres an official dlc for 'still alive' from portal??? that you can play??? and its got orange box art for the album cover its pretty cool
108 notes · View notes
zkiner · 2 months ago
Text
Tumblr media
Last night i was thinking about reverse roles au, specifically about what Human!Zim occupation will be and had the funniest idea ever.
66 notes · View notes
help-alaa-childrens · 18 days ago
Text
A truce agreement has been reached to temporarily stop the WAR on Gaza, Palestine that started in October 2023 & lasted for about 470 days.
Through mediation by the United States, the European Union countries & their loyal & peace-loving peoples, Egypt & Qatar, a temporary ceasefire was reached on Gaza for 42 days as the first of three stages.
I'd like you, my friends, to continue supporting me & reach our goal so that we can get the rest of my family & children out of Gaza & travel to a safe country & start a new future.
[£31,097/£56,000]
66 notes · View notes
eastgaysian · 9 months ago
Text
dragon age origins having super customizable party ai should be standard for games with real time with pause combat but it wasnt even standard for dragon age.
220 notes · View notes
melyzard · 10 months ago
Text
Okay, look, they talk to a Google rep in some of the video clips, but I give it a pass because this FREE course is a good baseline for personal internet safety that so many people just do not seem to have anymore. It's done in short video clip and article format (the videos average about a minute and a half). This is some super basic stuff like "What is PII and why you shouldn't put it on your twitter" and "what is a phishing scam?" Or "what is the difference between HTTP and HTTPS and why do you care?"
It's worrying to me how many people I meet or see online who just do not know even these absolute basic things, who are at constant risk of being scammed or hacked and losing everything. People who barely know how to turn their own computers on because corporations have made everything a proprietary app or exclusive hardware option that you must pay constant fees just to use. Especially young, somewhat isolated people who have never known a different world and don't realize they are being conditioned to be metaphorical prey animals in the digital landscape.
Anyway, this isn't the best internet safety course but it's free and easy to access. Gotta start somewhere.
Here's another short, easy, free online course about personal cyber security (GCFGlobal.org Introduction to Internet Safety)
Bonus videos:
youtube
(Jul 13, 2023, runtime 15:29)
"He didn't have anything to hide, he didn't do anything wrong, anything illegal, and yet he was still punished."
youtube
(Apr 20, 2023; runtime 9:24 minutes)
"At least 60% use their name or date of birth as a password, and that's something you should never do."
youtube
(March 4, 2020, runtime 11:18 minutes)
"Crossing the road safely is a basic life skill that every parent teaches their kids. I believe that cyber skills are the 21st century equivalent of road safety in the 20th century."
174 notes · View notes
cozylittleartblog · 5 months ago
Note
Please come back to Deviantart and upload all your art!!!!!!!!!
Tumblr media
deviantart can suck my whole entire dick and can keep sucking it until they decide to get rid of their AI bullshit
anyway reminder that y'all should join sheezyart
my username there is cozy
138 notes · View notes