Chapters: 2/?
Fandom: Be More Chill - Iconis/Tracz
Rating: Teen And Up Audiences
Warnings: Graphic Depictions Of Violence
Relationships: Jeremy Heere/Michael Mell, Jake Dillinger/Rich Goranski, Brooke Lohst/Chloe Valentine
Characters: Jeremy Heere, Jeremy Heere's Squip, Squip Squad Members (Be More Chill), Michael Mell, Rich Goranski, Rich Goranski's Squip, Jake Dillinger, Jake Dillinger's Squip, Christine Canigula, Christine Canigula's Squip
Additional Tags: Angst, Violence, Implied/Referenced Self-Harm, tags will be updated as story progresses, part of a series, Michael Mell-centric, Michael Mell Has a Crush on Jeremy Heere, Aromantic Asexual Christine Canigula, Asexual Michael Mell, theyre so gay, Pining, Angst with a Happy Ending, i promise it'll be happy ok, but there are other povs too
Series: Part 1 of Digital Chaos
Summary:
pt 1. michael
nobody forgave jeremy after the SQUIPcident. after all, why would they? it was his fault. now it's just him... and the voice within his head that won't quite go away.
chapter 2 is out now!!
The Dystopian Dance of Human Ineptitude: How We’ve Failed to Control AI and Its Perils
Humanity, in all its self-aggrandizing glory, has once again proven that it cannot handle the power it wields. We've unlocked the potential of AI, a tool that could revolutionize countless industries and improve the quality of life worldwide. Instead, we've allowed it to become yet another weapon in the arsenal of the ignorant, the ill-intentioned, and the incompetent. The generational divide and rampant IT illiteracy among key demographics have turned what should be a leap forward into a stumbling march towards catastrophe.
The Generational and IT-Literacy Gap: Breeding Grounds for Danger
Let's start with the elephant in the room: the generational divide. We have an entire cohort of individuals who, through no fault of their own, were thrust into a world that evolved too quickly for them to keep pace. These are the same people now attempting to navigate complex AI tools without a modicum of understanding. They're not just using these tools; they're also in positions of power, regulating and legislating technologies they can't begin to comprehend. Their ignorance isn't just a personal failing—it's a societal threat.
The irony here is palpable. The same generation that once marveled at the moon landing now struggles to send an email without inadvertently clicking on a phishing link. These individuals, whose IT literacy can be generously described as rudimentary, are now responsible for making decisions about technologies that could determine the future of our species. It’s akin to handing a loaded gun to a toddler and hoping for the best.
Regulatory Paralysis: A Testament to Human Short-Sightedness
And then there’s the regulatory landscape—or rather, the lack of one. Our policymakers, many of whom belong to this aforementioned cohort, are utterly unprepared to tackle the complexities of AI. They bumble through hearings, mispronounce basic terms, and rely on tech giants to self-regulate, a contradiction in terms if there ever was one. Their ineptitude is not just laughable; it's dangerous. We're dealing with tools that can manipulate information on a massive scale, yet our regulatory approach is stuck in the Stone Age.
Why haven’t we implemented stringent regulations? Because doing so would require acknowledging our collective fallibility and vulnerability—traits that humanity, in its hubristic splendor, refuses to accept. Instead, we prefer to believe that we remain in control, that our creations will never outstrip our ability to manage them. This is not just naive; it’s suicidal.
The Need for Draconian Measures: Regulate AI Like WMDs
Given the potential for AI to cause widespread harm, it's time we start treating it with the seriousness it deserves. AI tools, especially those with capabilities in digital art, information dissemination, and autonomous decision-making, should be regulated as strictly as weapons of mass destruction. The potential for mass disinformation and societal destabilization is not hypothetical; it’s already happening.
We need an entirely new regulatory framework, one that encompasses every conceivable application of AI. This includes:
Digital Art: AI-driven art tools can create realistic images and videos that can be used to spread misinformation. These tools should require certification and licensing to ensure they’re used responsibly.
Journalism and Media: AI in newsrooms can amplify biases and create echo chambers. Strict oversight is needed to maintain journalistic integrity and prevent the spread of fake news.
Marketing: AI tools can manipulate consumer behavior in unprecedented ways. Regulations must ensure ethical practices and prevent exploitation.
Scientific Research: AI can process vast amounts of data but can also perpetuate errors and biases. Rigorous peer review and validation processes are essential.
Sociopolitical Applications: AI in governance and policy-making must be transparent and accountable to prevent misuse.
Human Fallibility: The Ultimate Obstacle
Ultimately, the greatest obstacle to effective AI regulation is human fallibility itself. We are a species that struggles with foresight, easily swayed by short-term gains and immediate gratifications. Our systems of governance are slow to adapt, mired in bureaucracy and outdated thinking. The very traits that have allowed us to dominate the planet—curiosity, ambition, the drive to innovate—now threaten to be our undoing if we cannot temper them with wisdom and caution.
In the end, our arrogance and ignorance may very well lead to our downfall. We’ve created tools that could surpass our control, yet we continue to stumble forward, blind to the dangers. Unless we confront our shortcomings and implement drastic measures to regulate AI, we’re not just playing with fire; we’re dancing on the edge of a volcano, blissfully unaware that the ground beneath us is about to give way.
So, here we stand, on the precipice of a new era, armed with technologies we neither fully understand nor control, and governed by individuals who are as clueless as they are confident. It’s a recipe for disaster, a testament to our collective hubris, and a sobering reminder that, despite all our advancements, we remain our own worst enemy.