#AI management
jcmarchi · 2 days ago
Text
Understanding Shadow AI and Its Impact on Your Business
New Post has been published on https://thedigitalinsider.com/understanding-shadow-ai-and-its-impact-on-your-business/
The market is booming with innovation and new AI projects. It’s no surprise that businesses are rushing to use AI to stay ahead in the current fast-paced economy. However, this rapid AI adoption also presents a hidden challenge: the emergence of ‘Shadow AI.’
Here’s what AI is doing in day-to-day life:
Saving time by automating repetitive tasks.
Generating insights that were once time-consuming to uncover.
Improving decision-making with predictive models and data analysis.
Creating content through AI tools for marketing and customer service.
All these benefits make it clear why businesses are eager to adopt AI. But what happens when AI starts operating in the shadows?
This hidden phenomenon is known as Shadow AI.
What Do We Understand By Shadow AI?
Shadow AI refers to using AI technologies and platforms that haven’t been approved or vetted by the organization’s IT or security teams.
While it may seem harmless or even helpful at first, this unregulated use of AI can expose various risks and threats.
Over 60% of employees admit using unauthorized AI tools for work-related tasks. That’s a significant percentage when considering potential vulnerabilities lurking in the shadows.
Shadow AI vs. Shadow IT
The terms Shadow AI and Shadow IT might sound like similar concepts, but they are distinct.
Shadow IT involves employees using unapproved hardware, software, or services. On the other hand, Shadow AI focuses on the unauthorized use of AI tools to automate, analyze, or enhance work. It might seem like a shortcut to faster, smarter results, but it can quickly spiral into problems without proper oversight.
Risks Associated with Shadow AI
Let’s examine the risks of shadow AI and discuss why it’s critical to maintain control over your organization’s AI tools.
Data Privacy Violations
Using unapproved AI tools can risk data privacy. Employees may accidentally share sensitive information while working with unvetted applications.
One in five companies in the UK has faced data leakage as a result of employees using generative AI tools. The absence of proper encryption and oversight increases the chances of data breaches, leaving organizations open to cyberattacks.
Regulatory Noncompliance
Shadow AI brings serious compliance risks. Organizations must follow regulations like GDPR, HIPAA, and the EU AI Act to ensure data protection and ethical AI use.
Noncompliance can result in hefty fines. For example, GDPR violations can cost companies up to €20 million or 4% of their global annual revenue, whichever is higher.
Operational Risks
Shadow AI can create misalignment between the outputs generated by these tools and the organization’s goals. Over-reliance on unverified models can lead to decisions based on unclear or biased information. This misalignment can impact strategic initiatives and reduce overall operational efficiency.
In fact, a survey indicated that nearly half of senior leaders worry about the impact of AI-generated misinformation on their organizations.
Reputational Damage
The use of shadow AI can harm an organization’s reputation. Inconsistent results from these tools can spoil trust among clients and stakeholders. Ethical breaches, such as biased decision-making or data misuse, can further damage public perception.
A clear example is the backlash against Sports Illustrated when it was found to have published AI-generated content attributed to fake authors with fabricated profiles. This incident showed the risks of poorly managed AI use and sparked debates about its ethical impact on content creation. It highlights how a lack of regulation and transparency in AI can damage trust.
Why Shadow AI is Becoming More Common
Let’s go over the factors behind the widespread use of shadow AI in organizations today.
Lack of Awareness: Many employees do not know the company’s policies regarding AI usage. They may also be unaware of the risks associated with unauthorized tools.
Limited Organizational Resources: Some organizations do not provide approved AI solutions that meet employee needs. When approved solutions fall short or are unavailable, employees often seek external options to meet their requirements. This lack of adequate resources creates a gap between what the organization provides and what teams need to work efficiently.
Misaligned Incentives: Organizations sometimes prioritize immediate results over long-term goals. Employees may bypass formal processes to achieve quick outcomes.
Use of Free Tools: Employees may discover free AI applications online and use them without informing IT departments. This can lead to unregulated use of sensitive data.
Upgrading Existing Tools: Teams might enable AI features in approved software without permission. This can create security gaps when those features have not been through the security review they would normally require.
Manifestations of Shadow AI
Shadow AI appears in multiple forms within organizations. Some of these include:
AI-Powered Chatbots
Customer service teams sometimes use unapproved chatbots to handle queries. For example, an agent might rely on a chatbot to draft responses rather than referring to company-approved guidelines. This can lead to inaccurate messaging and the exposure of sensitive customer information.
Machine Learning Models for Data Analysis
Employees may upload proprietary data to free or external machine-learning platforms to discover insights or trends. A data analyst might use an external tool to analyze customer purchasing patterns but unknowingly put confidential data at risk.
Marketing Automation Tools
Marketing departments often adopt unauthorized tools to streamline tasks such as email campaigns or engagement tracking. These tools can improve productivity but may also mishandle customer data, violating compliance rules and damaging customer trust.
Data Visualization Tools
AI-based tools are sometimes used to create quick dashboards or analytics without IT approval. While they offer efficiency, these tools can generate inaccurate insights or compromise sensitive business data when used carelessly.
Shadow AI in Generative AI Applications
Teams frequently use tools like ChatGPT or DALL-E to create marketing materials or visual content. Without oversight, these tools may produce off-brand messaging or raise intellectual property concerns, posing potential risks to organizational reputation.
Managing the Risks of Shadow AI
Managing the risks of shadow AI requires a focused strategy emphasizing visibility, risk management, and informed decision-making.
Establish Clear Policies and Guidelines
Organizations should define clear policies for AI use within the organization. These policies should outline acceptable practices, data handling protocols, privacy measures, and compliance requirements.
Employees must also learn the risks of unauthorized AI usage and the importance of using approved tools and platforms.
Classify Data and Use Cases
Businesses must classify data based on its sensitivity and significance. Critical information, such as trade secrets and personally identifiable information (PII), must receive the highest level of protection.
Organizations should ensure that public or unverified cloud AI services never handle sensitive data. Instead, companies should rely on enterprise-grade AI solutions that provide strong data security.
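As an illustration of what such a control can look like in code, here is a minimal sketch of a pre-send gate that blocks obviously sensitive text from reaching an external service. The regex patterns and the send_to_approved_ai function are assumptions for illustration, not a real product; real deployments would pair a check like this with dedicated data loss prevention (DLP) tooling, since regexes alone miss most sensitive content.

```python
import re

# Illustrative patterns only -- a real deployment would use dedicated
# DLP tooling rather than hand-rolled regexes.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US Social Security numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # candidate payment card numbers
]

def contains_pii(text: str) -> bool:
    """Return True if the text matches any watched PII pattern."""
    return any(pattern.search(text) for pattern in PII_PATTERNS)

def send_to_approved_ai(text: str) -> str:
    # Hypothetical call to a vetted, enterprise-grade AI service.
    return f"[sent {len(text)} characters to the approved service]"

def safe_ai_request(text: str) -> str:
    """Gate that blocks sensitive text from leaving the organization."""
    if contains_pii(text):
        raise ValueError("Blocked: text appears to contain PII.")
    return send_to_approved_ai(text)

if __name__ == "__main__":
    print(safe_ai_request("Summarize our Q3 marketing plan."))
```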
Acknowledge Benefits and Offer Guidance
It is also important to acknowledge the benefits of shadow AI, which often arises from a desire for increased efficiency.
Instead of banning its use, organizations should guide employees in adopting AI tools within a controlled framework. They should also provide approved alternatives that meet productivity needs while ensuring security and compliance.
Educate and Train Employees
Organizations must prioritize employee education to ensure the safe and effective use of approved AI tools. Training programs should focus on practical guidance so that employees understand the risks and benefits of AI while following proper protocols.
Educated employees are more likely to use AI responsibly, minimizing potential security and compliance risks.
Monitor and Control AI Usage
Tracking and controlling AI usage is equally important. Businesses should implement monitoring tools to keep an eye on AI applications across the organization. Regular audits can help them identify unauthorized tools or security gaps.
Organizations should also take proactive measures like network traffic analysis to detect and address misuse before it escalates.
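To make the monitoring idea concrete, here is a minimal sketch that scans an outbound proxy log for requests to well-known generative AI domains. The log format and the domain watchlist are assumptions for illustration; real monitoring would plug into your existing proxy or SIEM tooling.

```python
from collections import Counter

# Illustrative watchlist -- maintain your own list based on which
# services your AI-use policy actually approves or bans.
AI_SERVICE_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, domain) pair for watched AI services.

    Assumes a simple space-separated log format: timestamp user domain.
    """
    hits: Counter = Counter()
    with open(path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 3:
                continue  # skip malformed lines
            _timestamp, user, domain = fields[:3]
            if domain in AI_SERVICE_DOMAINS:
                hits[(user, domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in scan_proxy_log("proxy.log").most_common():
        print(f"{user} -> {domain}: {count} requests")
```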
Collaborate with IT and Business Units
Collaboration between IT and business teams is vital for selecting AI tools that align with organizational standards. Business units should have a say in tool selection to ensure practicality, while IT ensures compliance and security.
This teamwork fosters innovation without compromising the organization’s safety or operational goals.
Steps Forward in Ethical AI Management
As AI dependency grows, managing shadow AI with clarity and control could be the key to staying competitive. The future of AI will rely on strategies that align organizational goals with ethical and transparent technology use.
To learn more about how to manage AI ethically, stay tuned to Unite.ai for the latest insights and tips.
0 notes
digitaljainy · 7 months ago
Text
AI in Digital Marketing: Revolutionizing the Industry
AI integration in digital marketing is revolutionizing the field in today's fast-paced digital environment. Companies are using AI-powered tools and technology to improve client engagement, expedite marketing management, and produce better outcomes. This blog examines how artificial intelligence is transforming digital marketing and how this will affect the direction of digital marketing services going forward.
To read more about how AI integration helps in digital marketing, visit our blog.
0 notes
mostlysignssomeportents · 1 month ago
Text
Bossware is unfair (in the legal sense, too)
You can get into a lot of trouble by assuming that rich people know what they're doing. For example, you might assume that ad-tech works – bypassing people's critical faculties, reaching inside their minds and brainwashing them with Big Data insights, because if that's not what's happening, then why would rich people pour billions into those ads?
https://pluralistic.net/2020/12/06/surveillance-tulip-bulbs/#adtech-bubble
You might assume that private equity looters make their investors rich, because otherwise, why would rich people hand over trillions for them to play with?
https://thenextrecession.wordpress.com/2024/11/19/private-equity-vampire-capital/
The truth is, rich people are suckers like the rest of us. If anything, succeeding once or twice makes you an even bigger mark, with a sense of your own infallibility that inflates to fill the bubble your yes-men seal you inside of.
Rich people fall for scams just like you and me. Anyone can be a mark. I was:
https://pluralistic.net/2024/02/05/cyber-dunning-kruger/#swiss-cheese-security
But though rich people can fall for scams the same way you and I do, the way those scams play out is very different when the marks are wealthy. As Keynes had it, "The market can remain irrational longer than you can remain solvent." When the marks are rich (or worse, super-rich), they can be played for much longer before they go bust, creating the appearance of solidity.
Noted Keynesian John Kenneth Galbraith had his own thoughts on this. Galbraith coined the term "bezzle" to describe "the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it." In that magic interval, everyone feels better off: the mark thinks he's up, and the con artist knows he's up.
Rich marks have looong bezzles. Empirically incorrect ideas grounded in the most outrageous superstition and junk science can take over whole sections of your life, simply because a rich person – or rich people – are convinced that they're good for you.
Take "scientific management." In the early 20th century, the con artist Frederick Taylor convinced rich industrialists that he could increase their workers' productivity through a kind of caliper-and-stopwatch driven choreographry:
https://pluralistic.net/2022/08/21/great-taylors-ghost/#solidarity-or-bust
Taylor and his army of labcoated sadists perched at the elbows of factory workers (whom Taylor referred to as "stupid," "mentally sluggish," and as "an ox") and scripted their motions to a fare-thee-well, transforming their work into a kind of kabuki of obedience. They weren't more efficient, but they looked smart, like obedient robots, and this made their bosses happy. The bosses shelled out fortunes for Taylor's services, even though the workers who followed his prescriptions were less efficient and generated fewer profits. Bosses were so dazzled by the spectacle of a factory floor of crisply moving people interfacing with crisply working machines that they failed to understand that they were losing money on the whole business.
To the extent they noticed that their revenues were declining after implementing Taylorism, they assumed that this was because they needed more scientific management. Taylor had a sweet con: the worse his advice performed, the more reason there was to pay him for more advice.
Taylorism is a perfect con to run on the wealthy and powerful. It feeds into their prejudice and mistrust of their workers, and into their misplaced confidence in their own ability to understand their workers' jobs better than their workers do. There's always a long dollar to be made playing the "scientific management" con.
Today, there's an app for that. "Bossware" is a class of technology that monitors and disciplines workers, and it was supercharged by the pandemic and the rise of work-from-home. Combine bossware with work-from-home and your boss gets to control your life even when you're in your own home – "work from home" becomes "live at work":
https://pluralistic.net/2021/02/24/gwb-rumsfeld-monsters/#bossware
Gig workers are at the white-hot center of bossware. Gig work promises "be your own boss," but bossware puts a Taylorist caliper wielder into your phone, monitoring and disciplining you as you drive your own car around delivering parcels or picking up passengers.
In automation terms, a worker hitched to an app this way is a "reverse centaur." Automation theorists call a human augmented by a machine a "centaur" – a human head supported by a machine's tireless and strong body. A "reverse centaur" is a machine augmented by a human – like the Amazon delivery driver whose app goads them to make inhuman delivery quotas while punishing them for looking in the "wrong" direction or even singing along with the radio:
https://pluralistic.net/2024/08/02/despotism-on-demand/#virtual-whips
Bossware pre-dates the current AI bubble, but AI mania has supercharged it. AI pumpers insist that AI can do things it positively cannot do – rolling out an "autonomous robot" that turns out to be a guy in a robot suit, say – and rich people are groomed to buy the services of "AI-powered" bossware:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
For an AI scammer like Elon Musk or Sam Altman, the fact that an AI can't do your job is irrelevant. From a business perspective, the only thing that matters is whether a salesperson can convince your boss that an AI can do your job – whether or not that's true:
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
The fact that AI can't do your job, but that your boss can be convinced to fire you and replace you with the AI that can't do your job, is the central fact of the 21st century labor market. AI has created a world of "algorithmic management" where humans are demoted to reverse centaurs, monitored and bossed about by an app.
The techbro's overwhelming conceit is that nothing is a crime, so long as you do it with an app. Just as fintech is designed to be a bank that's exempt from banking regulations, the gig economy is meant to be a workplace that's exempt from labor law. But this wheeze is transparent, and easily pierced by enforcers, so long as those enforcers want to do their jobs. One such enforcer is Alvaro Bedoya, an FTC commissioner with a keen interest in antitrust's relationship to labor protection.
Bedoya understands that antitrust has a checkered history when it comes to labor. As he's written, the history of antitrust is a series of incidents in which Congress revised the law to make it clear that forming a union was not the same thing as forming a cartel, only to be ignored by boss-friendly judges:
https://pluralistic.net/2023/04/14/aiming-at-dollars/#not-men
Bedoya is no mere historian. He's an FTC Commissioner, one of the most powerful regulators in the world, and he's profoundly interested in using that power to help workers, especially gig workers, whose misery starts with systemic, wide-scale misclassification as contractors:
https://pluralistic.net/2024/02/02/upward-redistribution/
In a new speech to NYU's Wagner School of Public Service, Bedoya argues that the FTC's existing authority allows it to crack down on algorithmic management – that is, algorithmic management is illegal, even if you break the law with an app:
https://www.ftc.gov/system/files/ftc_gov/pdf/bedoya-remarks-unfairness-in-workplace-surveillance-and-automated-management.pdf
Bedoya starts with a delightful analogy to The Hawtch-Hawtch, a mythical town from a Dr Seuss poem. The Hawtch-Hawtch economy is based on beekeeping, and the Hawtchers develop an overwhelming obsession with their bee's laziness, and determine to wring more work (and more honey) out of him. So they appoint a "bee-watcher." But the bee doesn't produce any more honey, which leads the Hawtchers to suspect their bee-watcher might be sleeping on the job, so they hire a bee-watcher-watcher. When that doesn't work, they hire a bee-watcher-watcher-watcher, and so on and on.
For gig workers, it's bee-watchers all the way down. Call center workers are subjected to "AI" video monitoring, and "AI" voice monitoring that purports to measure their empathy. Another AI times their calls. Two more AIs analyze the "sentiment" of the calls and the success of workers in meeting arbitrary metrics. On average, a call-center worker is subjected to five forms of bossware, which stand at their shoulders, marking them down and brooking no debate.
For example, when an experienced call center operator fielded a call from a customer with a flooded house who wanted to know why no one from her boss's repair plan system had come out to address the flooding, the operator was punished by the AI for failing to try to sell the customer a repair plan. There was no way for the operator to protest that the customer had a repair plan already, and had called to complain about it.
Workers report being sickened by this kind of surveillance, literally – stressed to the point of nausea and insomnia. Ironically, one of the most pervasive sources of automation-driven sickness is the "AI wellness" apps that bosses are sold by AI hucksters:
https://pluralistic.net/2024/03/15/wellness-taylorism/#sick-of-spying
The FTC has broad authority to block "unfair trade practices," and Bedoya builds the case that this is an unfair trade practice. Proving an unfair trade practice is a three-part test: a practice is unfair if it causes "substantial injury," can't be "reasonably avoided," and isn't outweighed by a "countervailing benefit." In his speech, Bedoya makes the case that algorithmic management satisfies all three steps and is thus illegal.
On the question of "substantial injury," Bedoya describes the workday of warehouse workers working for ecommerce sites. He describes one worker who is monitored by an AI that requires him to pick and drop an object off a moving belt every 10 seconds, for ten hours per day. The worker's performance is tracked by a leaderboard, and supervisors punish and scold workers who don't make quota, and the algorithm auto-fires if you fail to meet it.
Under those conditions, it was only a matter of time until the worker experienced injuries to two of his discs and was permanently disabled, with the company being found 100% responsible for this injury. OSHA found a "direct connection" between the algorithm and the injury. No wonder warehouses sport vending machines that sell painkillers rather than sodas. It's clear that algorithmic management leads to "substantial injury."
What about "reasonably avoidable?" Can workers avoid the harms of algorithmic management? Bedoya describes the experience of NYC rideshare drivers who attended a round-table with him. The drivers describe logging tens of thousands of successful rides for the apps they work for, on promise of "being their own boss." But then the apps start randomly suspending them, telling them they aren't eligible to book a ride for hours at a time, sending them across town to serve an underserved area and still suspending them. Drivers who stop for coffee or a pee are locked out of the apps for hours as punishment, and so drive 12-hour shifts without a single break, in hopes of pleasing the inscrutable, high-handed app.
All this, as drivers' pay is falling and their credit card debts are mounting. No one will explain to drivers how their pay is determined, though the legal scholar Veena Dubal's work on "algorithmic wage discrimination" reveals that rideshare apps temporarily increase the pay of drivers who refuse rides, only to lower it again once they're back behind the wheel:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
This is like the pit boss who gives a losing gambler some freebies to lure them back to the table, over and over, until they're broke. No wonder they call this a "casino mechanic." There are only two major rideshare apps, and they both use the same high-handed tactics. For Bedoya, this satisfies the second test for an "unfair practice" – it can't be reasonably avoided. If you drive rideshare, you're trapped by the harmful conduct.
The final prong of the "unfair practice" test is whether the conduct has "countervailing value" that makes up for this harm.
To address this, Bedoya goes back to the call center, where operators' performance is assessed by "Speech Emotion Recognition" algorithms, a pseudoscientific hoax that purports to be able to determine your emotions from your voice. These SERs don't work – for example, they might interpret a customer's laughter as anger. But they fail differently for different kinds of workers: workers with accents – from the American south, or the Philippines – attract more disapprobation from the AI. Half of all call center workers are monitored by SERs, and a quarter of workers have SERs scoring them "constantly."
Bossware AIs also produce transcripts of these workers' calls, but workers with accents find them "riddled with errors." These are consequential errors, since their bosses assess their performance based on the transcripts, and yet another AI produces automated work scores based on them.
In other words, algorithmic management is a procession of bee-watchers, bee-watcher-watchers, and bee-watcher-watcher-watchers, stretching to infinity. It's junk science. It's not producing better call center workers. It's producing arbitrary punishments, often against the best workers in the call center.
There is no "countervailing benefit" to offset the unavoidable substantial injury of life under algorithmic management. In other words, algorithmic management fails all three prongs of the "unfair practice" test, and it's illegal.
What should we do about it? Bedoya builds the case for the FTC acting on workers' behalf under its "unfair practice" authority, but he also points out that the lack of worker privacy is at the root of this hellscape of algorithmic management.
He's right. The last major update Congress made to US privacy law was in 1988, when they banned video-store clerks from telling the newspapers which VHS cassettes you rented. The US is long overdue for a new privacy regime, and workers under algorithmic management are part of a broad coalition that's closer than ever to making that happen:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
Workers should have the right to know which of their data is being collected, who it's being shared with, and how it's being used. We all should have that right. That's part of what motivated the actors' strike: actors were being ordered to wear mocap suits to produce data that could be used to create digital doubles of them, "training their replacement," except the replacement was a deepfake.
With a Trump administration on the horizon, the future of the FTC is in doubt. But the coalition for a new privacy law includes many of Trumpland's most powerful blocs – like Jan 6 rioters whose location was swept up by Google and handed over to the FBI. A strong privacy law would protect their Fourth Amendment rights – but also the rights of BLM protesters who experienced this far more often, and with far worse consequences, than the insurrectionists.
The "we do it with an app, so it's not illegal" ruse is wearing thinner by the day. When you have a boss for an app, your real boss gets an accountability sink, a convenient scapegoat that can be blamed for your misery.
The fact that this makes you worse at your job, that it loses your boss money, is no guarantee that you will be spared. Rich people make great marks, and they can remain irrational longer than you can remain solvent. Markets won't solve this one – but worker power can.
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
2K notes · View notes
frogfacey · 2 months ago
Text
"this new generation is the dumbest and laziest ever because ai is ruining people's ability to learn!!! Why are you using ai to write emails and cover letters when you could instead LEARN this beautiful, important and valuable skill instead of growing lazy and getting ai slop to slop it for you?" oh my god. oooooh my god. oh my goooooood.
1K notes · View notes
giggso · 2 years ago
Text
Customer Support Automated ITOps - Giggso
Enable our powerful omni-channel workflows to interact with your customers across multiple channels, backed by ITSM processes for responsible AI management. Get started today.
Read More: https://www.giggso.com/customer-support/
0 notes
probablyanalienindisguise · 10 months ago
Text
Ihnmaims art I made in a car ride except they get progressively worse as I got more tired 💀
[five images]
2K notes · View notes
cipheramnesia · 29 days ago
Text
[image]
Pulling this because I don't want to get into arguing about AI with a stranger, but I do think it's very funny to see Tim Burton upset about having his style imitated. "It's nothing like when I, a human, have an entirely derivative and unoriginal style - first of all, no robot could ever reproduce my human racism" oh no Tim, the machines are racist too, don't worry.
155 notes · View notes
mawrrbid · 6 months ago
Text
[image]
Made a bunch of silly doodles with Hermes and Touchstarved LI because I didn't feel like doing anything more polished.
Absolutely loved how all of them turned out, might do the same with Kreide someday.
All separate sketches under the cut. <3
[five images]
239 notes · View notes
zytes · 4 months ago
Text
[seven images]
8.15.2024 — 6:47pm -> 8:17pm
139 notes · View notes
eastgaysian · 7 months ago
Text
dragon age origins having super customizable party ai should be standard for games with real time with pause combat but it wasnt even standard for dragon age.
220 notes · View notes
melyzard · 8 months ago
Text
Okay, look, they talk to a Google rep in some of the video clips, but I give it a pass because this FREE course is a good baseline for personal internet safety that so many people just do not seem to have anymore. It's done in short video clip and article format (the videos average about a minute and a half). This is some super basic stuff like "What is PII and why you shouldn't put it on your twitter" and "what is a phishing scam?" Or "what is the difference between HTTP and HTTPS and why do you care?"
It's worrying to me how many people I meet or see online who just do not know even these absolute basic things, who are at constant risk of being scammed or hacked and losing everything. People who barely know how to turn their own computers on because corporations have made everything a proprietary app or exclusive hardware option that you must pay constant fees just to use. Especially young, somewhat isolated people who have never known a different world and don't realize they are being conditioned to be metaphorical prey animals in the digital landscape.
Anyway, this isn't the best internet safety course but it's free and easy to access. Gotta start somewhere.
Here's another short, easy, free online course about personal cyber security (GCFGlobal.org Introduction to Internet Safety)
Bonus videos:
[YouTube video]
(Jul 13, 2023, runtime 15:29)
"He didn't have anything to hide, he didn't do anything wrong, anything illegal, and yet he was still punished."
[YouTube video]
(Apr 20, 2023; runtime 9:24 minutes)
"At least 60% use their name or date of birth as a password, and that's something you should never do."
[YouTube video]
(March 4, 2020, runtime 11:18 minutes)
"Crossing the road safely is a basic life skill that every parent teaches their kids. I believe that cyber skills are the 21st century equivalent of road safety in the 20th century."
174 notes · View notes
zkiner · 15 days ago
Text
[image]
Last night i was thinking about reverse roles au, specifically about what Human!Zim's occupation would be, and had the funniest idea ever.
60 notes · View notes
cozylittleartblog · 3 months ago
Note
Please come back to Deviantart and upload all your art!!!!!!!!!
[image]
deviantart can suck my whole entire dick and can keep sucking it until they decide to get rid of their AI bullshit
anyway reminder that y'all should join sheezyart
my username there is cozy
137 notes · View notes
cent-scratchnsniff · 3 months ago
Text
[image]
it was just going to be a few warmup doodles but then she infected the rest of the page like the ever eternal and spreading spores. hod!!! hod. hod :)
#lobotomy corporation#lobcorp#hod#hod lobcorp#lobotomy corp spoilers#I GUESS i almost forgot i drew her box form#lobcorp spoilers#and michelle actually. ..#both very tiny. itty bitty. microscopic#other sephirah there too as normal. i cant have her alone. and Angelina as well on the top patting her#i have a hard time fully capturing her for some reason. in my mind. maybe its because is the disconnected period!!! mentally#she genuinely wishes to care and be kind yet theres a dissonance with what she does..? or how it ends up being taken or what she does to en#up bringing those actions into reality. she can be forceful? wanting to have employees attend therapy sessions and meetings for suppression#tactics. which i think is also something the safety team is incharge of iirc. so that means shes doing way more that what she needs to on#her job as a sephirah. just for the sake of employees#she really does care as shes one of the only to Directly attempt to change their circumstances and quality of life and health#sure chesed doesnt punish employees when they dont do their work assigned or stress them out with work#but he doesnt actively push to attempt to make changes to aid employees besides the research perks which is to the manager#yesod IS right next to her and does also genuinely care but when it comes to employees hes distant at best when it comes to them and the#way he tries to protect them is by enforcing rules but he doesnt really create or attempt to help them like hod does#yesod is sort of a passive? way of doing it. yes he doesn make a push to enforce said rules but he doesnt make new ones. just follows what#is already there in place. hod tries to make new ways and not just for the safety of people like how yesod's has them physically fine and#not letting them over a certain threshold of mental corruption but she tries to have a program to Directly Address such a thing#its born out of care but the genuine worry of being a good person and her naivety ends up having it do more harm than good#sure there may be some employees that actually like and find it useful but so many are just accepting to their fate of Dying to where#her care seems pointless. shes a sephirah and to them a literal metal box why would they go ahead and feel bad for what an 'ai' is feeling#as she is interrupting their free time in the company#which is rude. and shit. iirc the counseling is compulsory but people go because shes a sephirah and their superior. the thought was there#but again it comes off wrong and ends up not working because shes their superior in the end#EEK!!! yeah... hod. the hod. there is WAY more but i can't fit it all here and i already typed enough
49 notes · View notes
yuseirra · 4 months ago
Text
[five images]
studying for the future
61 notes · View notes
brotrustmeicanwrite · 5 months ago
Text
I fucking hate AI but heavens would it be useful if it wasn't such an unethical shit show
First, just to be clear, I'm talking about actually using AI as a tool to support your writing process, not to generate soulless texts made from stolen data instead of writing yourself.
Back when ChatGPT first became available it was still pretty useless so I had a lot of time to learn about how it's made, how it works and the ethics of it before ever touching the technology. I decided pretty quickly to never use it to generate text (or images) for actual writing and art but I still wanted to experiment with what else it could do (because I'm a nosy bitch that needs to know and poke everything).
And HEAVENS was it a blessing for writing with adhd
The last time I wrote more than 200 words in a day (outside of school work obviously) was 7th grade. I wrote over 8k just in notes the day Google's "Gemini" (formerly "Bard") became available to the public.
In order not to jeopardize my existing work, I decided to make a completely new story with Bard's help that wasn't linked in any way to anything I had made before. So I started with a prompt along the lines of "I need help writing a story". At first, it immediately started generating a completely random story about a green tiger, but after some trial and error, I got it to start asking questions instead.
What do you want the theme of your story to be?
What genre do you want to write in?
What time period do you want your story to take place in?
Is there magic?
Are there other sentient creatures besides humans?
And so on and so forth. Until the questions became extremely specific after covering all the bases. I could tell that all I was doing was essentially talking to an amalgamation of every "how to write" blog and website you've ever seen and telling it which part I wanted to work on next but it still felt great because the AI didn't actually contribute anything besides a few suggestions of common tropes and themes here and some synonyms and related words there; I was doing all the work.
And that's the point.
Nothing in that exchange was something I couldn't easily do on my own. But what happened was that I had turned what is usually a chaotic mess of a railway network of thoughts into a clear and most importantly recorded conversation. I can sit down and answer all those questions on my own but what usually happens when I do, is that every thought I have branches out into 4-7 new ones which I then attempt to record all at once (which obviously doesn't work, yay adhd) only to end up lost in thought with maybe 20 lines of notes in total after 6 hours at the table. Alternatively, either because I get bored or just because, I get distracted by something or my own thoughts about a different unrelated topic and end up with even less.
Working within the boundaries of a conversation forces you to focus on one specific question at a time and answer it to progress. And the engagement from the back and forth is just enough entertainment to not get bored. The six hours I mentioned before is the time I spent chatting with what is essentially a glorified chatbot that day, way less time than what I spent on any other project, and yet I have more notes and a clearer image of the story than I do about any of my real work. I have a recorded train of thought.
In theory, this would also work with a real human in a real conversation but realistically only very few people have someone who would be willing to do that; I certainly don't have a someone like that. Not to mention that someone doesn't always have time. Besides that, a real human conversation involves two minds with their own ideas, both of which are trying to contribute their own thoughts and opinions equally. The type of AI chat that I experimented with, on the other hand, is essentially just the conversation you have with yourself when answering those questions, only with part of it outsourced to a computer and no one else butting into your train of thought.
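For anyone who wants to recreate the setup, the loop itself is trivial; the whole trick is the instruction that flips the model into interviewer mode. A rough sketch, with a stubbed generate() standing in for whatever chatbot or API you would actually call (all names here are made up for illustration):

```python
INTERVIEWER_PROMPT = (
    "You are helping me plan a story. Do NOT write any story text. "
    "Ask me one question at a time about theme, genre, setting, "
    "characters, and plot, getting more specific as my answers pile up. "
    "Suggest a few common options with each question."
)

def generate(transcript: list[str]) -> str:
    # Stub standing in for a real chatbot or API call -- swap in
    # whatever model you actually use; the details here are assumptions.
    return "What do you want the theme of your story to be?"

def brainstorm() -> list[str]:
    """Run the question-and-answer loop, returning the recorded transcript."""
    transcript = [INTERVIEWER_PROMPT]
    while True:
        question = generate(transcript)
        transcript.append(question)
        answer = input(f"{question}\n> ")
        if answer.strip().lower() == "done":
            return transcript  # a recorded train of thought
        transcript.append(answer)

if __name__ == "__main__":
    print("\n".join(brainstorm()))
```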
On that note, I also tried to get it to critique my writing but besides fixing grammatical errors all that thing did was sing praises as if I was God. That's where you'll 100000% need humans.
tl;dr writing with AI as an assistant has basically the same effect as body doubling but it’s an unethical shit show so I’m not doing it again. Also I forgot to mention I did repeat the experiment for accuracy with different amounts of spoons, and it makes me extra bitter that it was very consistent
54 notes · View notes