#Generative AI Misconceptions
Text
Unlock the truth about Generative AI. Learn the facts behind 10 common myths and misconceptions to navigate the AI landscape with confidence.
#AI Misconceptions#AI Myths Busted#AI Myths Debunked#Concept of Generative AI#Generative AI#Generative AI Misconceptions#Generative AI Myths#Misconceptions About AI
0 notes
Text
10 Generative AI Myths You Need to Stop Believing Now
Original source: 10 Generative AI Myths You Need to Stop Believing Now
Many people are already familiar with the concept of generative AI, yet numerous myths and misconceptions still surround it. Understanding the reality behind the technology is essential to using it appropriately.
Here, ten common myths about generative AI are debunked to help readers separate fact from fiction.
1. AI Can Do Everything Humans Can Do (And Better)
Without a doubt, the most common myth about AI is that it can replicate everything humans do, and do it better. Despite impressive advances in generative AI applications spanning language, vision, and even artistic output, an AI system remains a tool built by people. Artificial intelligence is not as holistic as human intelligence: it lacks personal insight, emotions, and self-awareness.
AI is used most effectively for clearly defined tasks. For example, it can process far more information in a given time than any person could, which makes it helpful in areas like data analysis and forecasting. But it is weak at problems that require practical reasoning, moral judgment, or an understanding of contingencies. Generative AI can create text and images from patterns learned in its training data, but it does not comprehend the content the way a human does.
2. AI Writing is Automatically Plagiarism-Free
Another myth is that AI-generated writing is automatically free of plagiarism because the system produces "original" output. In reality, generative AI assembles its output from existing data; it does not create text from nothing. That means one can never be certain the model is not regurgitating fragments of its training set, so questions of originality and plagiarism can still arise.
AI content generation therefore needs well-developed originality checks to ensure the output is genuinely original and not plagiarized. Plagiarism detectors are useful, but their results should always be reviewed by a person. For the same reason, training data must be selected with care to avoid reproducing someone else's work. Understanding these limitations is essential to applying AI responsibly in content creation.
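To make the "checks plus human review" point concrete, here is a minimal sketch of a naive overlap check in Python. The corpus, threshold, and function name are illustrative assumptions; real plagiarism detectors are far more sophisticated (indexing, shingling, web-scale corpora).

```python
from difflib import SequenceMatcher

def flag_overlaps(generated, known_sources, threshold=0.6):
    """Return known sources whose text overlaps suspiciously with the draft.

    Naive whole-text similarity -- purely illustrative; production tools
    compare against indexed, web-scale corpora.
    """
    flagged = []
    for source in known_sources:
        ratio = SequenceMatcher(None, generated.lower(), source.lower()).ratio()
        if ratio >= threshold:
            flagged.append((round(ratio, 2), source))
    return sorted(flagged, reverse=True)

draft = "The quick brown fox jumps over the lazy dog."
corpus = ["The quick brown fox jumped over a lazy dog.",
          "An unrelated sentence entirely."]
print(flag_overlaps(draft, corpus))  # a human still reviews whatever gets flagged
```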
3. AI Completely Replaces Human Creativity
Another myth holds that AI can take over human creativity. AI can support and improve creative work, but it does not possess the creativity inherent in human beings. Creativity is more than assembling different ideas into new compositions; it encompasses emotional depth, cultural context, and innovation grounded in human experience.
Whether in music, art, or written text, generative AI is creative in the way a parrot is: it recombines the creativity fed into it, following the patterns in its training data. It has no intent to create. Rather, AI is best seen as a tool that supplements human imagination by offering more ideas, more time, and new means of manipulating them.
4. AI is Unbiased and Objective
One more myth is that AI is completely neutral and free of opinion or prejudice. In reality, an AI system absorbs the biases and discrimination present in its training data and in the algorithms behind its design. By definition, an AI system is only as good as the data it is given; if that data is prejudiced, the output will be prejudiced too. This is especially problematic in sensitive decision-making contexts such as hiring, policing, and lending.
It is important to select training data that is diverse and inclusive, to audit AI systems for bias regularly, and to incorporate fairness constraints into AI algorithms. These issues must be addressed with transparency and accountability in the development and deployment of AI so that the systems in use are fair and equitable.
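As one concrete illustration, a regular bias audit can start with something as simple as comparing outcome rates across groups (demographic parity). The data and names below are toy assumptions; real audits examine many more metrics.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Outcome rate per group from (group, approved) pairs.

    A toy demographic-parity check: a large gap between groups is a
    signal to investigate, not proof of bias on its own.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(approval_rates(sample))  # {'A': 0.666..., 'B': 0.333...} -- worth a closer look
```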
5. AI will Take All Our Jobs
One of the main worries is the idea that AI will take our jobs. It is true that AI and automation threaten some positions, but they are better understood as reinventing jobs than eliminating them. In human-AI collaboration, AI is useful precisely because it can handle the routine, repetitive activities that do not require human creativity.
Historically, each generation of technology has created new forms of employment even as it left some existing positions without demand. The major challenge, then, is helping workers develop new skills in line with AI. Broad access to education and training in the emerging areas of digital literacy, AI, and data science will need to be ensured in the near future.
6. AI is a Silver Bullet for All Your Content Needs
Some think that with the help of AI, content creation becomes effortless. AI can genuinely improve the content generation process, but it is not a universal solution. Much of what an AI creates must be reviewed by a human to correct mistakes, keep it current, and improve its quality. AI also struggles to grasp context and subtlety, which are critical for producing high-quality, valuable content.
AI can write first drafts, offer suggestions, and even help optimize content for SEO. But the final adjustment and refinement of content still require human input to meet exact standards and genuinely appeal to the audience.
7. AI Can Fully Understand and Replicate Human Emotions
Another myth worth mentioning: that AI can capture and mimic human feelings. While AI can analyze emotional signals and produce responses that appear sympathetic, there is no feeling behind them. AI can be designed to identify signs of emotion in people's actions and words, but that does not mean it actually knows or experiences emotion.
Affective computing, or "emotional AI," is the branch of artificial intelligence focused on making human-computer interaction responsive to emotion. But these systems operate on predefined rules and learned data patterns, which gives them nothing like the emotional intelligence of the human heart. AI can imitate emotion, but it cannot replace the feeling that people in the same mood share.
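A deliberately crude sketch of the "predefined rules" idea: keyword matching presented as emotion detection. This toy is not how modern affective-computing systems work internally, and the word lists are illustrative assumptions, but it makes the underlying point -- the program matches patterns; it feels nothing.

```python
# Toy rule-based "emotion detection": the lexicon is an illustrative assumption.
EMOTION_WORDS = {
    "thrilled": "joy", "happy": "joy", "glad": "joy",
    "sad": "sadness", "miserable": "sadness",
    "angry": "anger", "furious": "anger",
}

def detect_emotions(text):
    """Return emotion labels whose trigger words appear in the text."""
    lowered = text.lower()
    return {label for word, label in EMOTION_WORDS.items() if word in lowered}

print(detect_emotions("I'm thrilled about the launch but sad it took so long"))
# {'joy', 'sadness'} -- pattern matching, not feeling
```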
8. AI is Completely Secure and Trustworthy
Believing that AI is fully safe and trustworthy is a misconception best avoided. AI systems carry several security risks, including adversarial attacks, hacking, and misuse. Security is fundamental when deploying an AI system, to protect it from both external and internal threats.
AI developers and users should keep appropriate security measures, such as encryption, auditing, and monitoring, in place at all times. There is also a need for ethical standards and legal frameworks to encourage proper, accountable use of AI. Trust is established gradually, through constant engagement with security and ethics issues.
9. AI is Infallible and Always Accurate
Another myth is that AI is always accurate and never makes mistakes. In fact, despite high overall accuracy, AI can produce a great number of errors. AI systems fail, for instance, because of insufficient training examples, algorithmic defects, or unanticipated inputs. Relying on AI results without human oversight or monitoring invites problems.
It is important to understand that AI is a tool that can strengthen human capabilities, not replace them. AI solutions need human expertise and judgment to validate outputs for accuracy and reliability. Awareness of AI's drawbacks supports better decisions about where AI is best applied.
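A minimal sketch of that human-in-the-loop idea: route anything the model is not confident about to a person. The threshold, labels, and function name are illustrative assumptions; in practice the cutoff is tuned per application.

```python
REVIEW_THRESHOLD = 0.85  # illustrative cutoff; tuned per application in practice

def route_prediction(label, confidence):
    """Auto-accept confident predictions; queue the rest for human review."""
    if confidence >= REVIEW_THRESHOLD:
        return ("auto-accepted", label)
    return ("needs human review", label)

# Pretend model outputs as (label, confidence) pairs.
for output in [("invoice", 0.97), ("contract", 0.41)]:
    print(route_prediction(*output))
# ('auto-accepted', 'invoice')
# ('needs human review', 'contract')
```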
10. AI is Only for Tech-Savvy Experts
The last myth is that only people with advanced IT skills can use AI. Creating and deploying sophisticated AI systems does demand technical expertise, but many AI tools and applications are built to suit everyone. Friendly interfaces, ready-made models, and documentation readable by non-specialists put AI technologies within reach of a much wider audience.
Businesses and individual users no longer need a strong technical background to work with AI; they can turn to AI platforms, automated machine learning tools, and off-the-shelf applications. These tools bring artificial intelligence to far more people, letting a wider circle of individuals try AI on their own tasks.
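As one example of how low the bar has become, a ready-made model from the Hugging Face Transformers library (used here purely as an illustration; many comparable platforms exist) can generate text in a few lines:

```python
# pip install transformers torch
from transformers import pipeline

# A small public model; no ML expertise needed to run it.
generator = pipeline("text-generation", model="gpt2")
result = generator("Generative AI is", max_new_tokens=20)
print(result[0]["generated_text"])
```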
Conclusion: Separating Generative AI Fact from Fiction
Generative AI is one of the most influential technologies of our time, and it is important to cut through the myth and hype surrounding its possibilities. Separating fact from fiction gives people reasonable expectations and lets them put AI's abilities to use properly and safely. Understanding both AI's capabilities and its drawbacks is the key to using this powerful invention to enhance human creativity, reduce costs, and streamline the development of new products and services, while staying mindful of its ethical and security issues.
If you want to read the full blog, click here: 10 Generative AI Myths You Need to Stop Believing Now
#AI Misconceptions#AI Myths Busted#AI Myths Debunked#Concept of Generative AI#Generative AI#Generative AI Misconceptions#Generative AI Myths#Misconceptions About AI
0 notes
Text
pulling out a section from this post (a very basic breakdown of generative AI) for easier reading;
AO3 and Generative AI
There are unfortunately some massive misunderstandings in regards to AO3 being included in LLM training datasets. This post was semi-prompted by the ‘Knot in my name’ AO3 tag (for those of you who haven’t heard of it, it’s supposed to be a fandom anti-AI event where AO3 writers help “further pollute” AI with Omegaverse), so let’s take a moment to address AO3 in conjunction with AI. We’ll start with the biggest misconception:
1. AO3 wasn’t used to train generative AI.
Or at least not any more than any other internet website. AO3 was not deliberately scraped to be used as LLM training data.
The AO3 moderators found traces of the Common Crawl web crawler in their servers. The Common Crawl is an open data repository of raw web page data, metadata extracts and text extracts collected from 10+ years of web crawling. Its collective data is measured in petabytes. (As a note, it also only features samples of the available pages on a given domain in its datasets, because its data is freely released under fair use and this is part of how they navigate copyright.) LLM developers use it and similar web crawls like Google’s C4 to bulk up the overall amount of pre-training data.
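As an aside, anyone can check which pages of a domain appear in a given crawl via Common Crawl's public CDX index. A sketch -- the crawl ID below is just an example; the available crawls are listed at index.commoncrawl.org:

```python
# Query Common Crawl's CDX index for sample captures of a domain.
import json
import urllib.request

url = ("https://index.commoncrawl.org/CC-MAIN-2023-06-index"
       "?url=archiveofourown.org/*&output=json&limit=5")
with urllib.request.urlopen(url) as response:
    for line in response:
        record = json.loads(line)
        print(record["timestamp"], record["url"])
```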
AO3 is big to an individual user, but it’s actually a small website when it comes to the amount of data used to pre-train LLMs. It’s also just a bad candidate for training data. As a comparison example, Wikipedia is often used as high quality training data because it’s a knowledge corpus and its moderators put a lot of work into maintaining a consistent quality across its web pages. AO3 is just a repository for all fanfic -- it doesn’t have any of that quality maintenance nor any knowledge density. Just in terms of practicality, even if people could get around the copyright issues, the sheer amount of work that would go into curating and labeling AO3’s data (or even a part of it) to make it useful for the fine-tuning stages most likely outstrips any potential usage.
Speaking of copyright, AO3 is a terrible candidate for training data just based on that. Even if people (incorrectly) think fanfic doesn’t hold copyright, there are plenty of books and texts that are public domain that can be found in online libraries that make for much better training data (or rather, there is a higher consistency in quality for them that would make them more appealing than fic for people specifically targeting written story data). And for any scrapers who don’t care about legalities or copyright, they’re going to target published works instead. Meta is in fact currently getting sued for including published books from a shadow library in its training data (note, this case is not in regards to any copyrighted material that might’ve been caught in the Common Crawl data, it’s regarding a book repository of published books that was scraped specifically to bring in some higher quality data for the first training stage). In a similar case, there’s an anonymous group suing Microsoft, GitHub, and OpenAI for training their LLMs on open source code.
Getting back to my point, AO3 is just not desirable training data. It’s not big enough to be worth scraping for pre-training data, it’s not curated enough to be considered for high quality data, and its data comes with copyright issues to boot. If LLM creators are saying there was no active pursuit in using AO3 to train generative AI, then there was (99% likelihood) no active pursuit in using AO3 to train generative AI.
AO3 has some preventative measures against being included in future Common Crawl datasets, which may or may not work, but there’s no way to remove any previously scraped data from that data corpus. And as a note for anyone locking their AO3 fics: that might potentially help against future AO3 scrapes, but it is rather moot if you post the same fic in full to other platforms like ffn, twitter, tumblr, etc. that have zero preventative measures against data scraping.
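Those preventative measures presumably center on robots.txt (an assumption on my part, but it is the standard opt-out mechanism, and Common Crawl documents that its CCBot crawler honors it):

```
# robots.txt -- standard opt-out for Common Crawl's crawler (CCBot).
# Compliance is voluntary: polite crawlers honor it; bad actors may not.
User-agent: CCBot
Disallow: /
```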
2. A/B/O is not polluting generative AI
…I’m going to be real, I have no idea what people expected to prove by asking AI to write Omegaverse fic. At the very least, people know A/B/O fics are not exclusive to AO3, right? The genre isn’t even exclusive to fandom -- it started in fandom, sure, but it expanded to general erotica years ago. It’s all over social media. It has multiple Wikipedia pages.
More to the point though, omegaverse would only be “polluting” AI if LLMs were spewing omegaverse concepts unprompted or like…associated knots with dicks more than rope or something. But people asking AI to write omegaverse and AI then writing omegaverse for them is just AI giving people exactly what they asked for. And…I hate to point this out, but LLMs writing for a niche the LLM trainers didn’t deliberately train the LLMs on is generally considered to be a good thing to the people who develop LLMs. The capability to fill niches developers didn’t even know existed increases LLMs’ marketability. If I were a betting man, what fandom probably saw as a GOTCHA moment, AI people probably saw as a good sign of LLMs’ future potential.
3. Individuals cannot affect LLM training datasets.
So back to the fandom event, with the stated goal of sabotaging AI scrapers via omegaverse fic.
…It’s not going to do anything.
Let’s add some numbers to this to help put things into perspective:
LLaMA’s 65 billion parameter model was trained on 1.4 trillion tokens. Of that 1.4 trillion tokens, about 67% of the training data was from the Common Crawl (roughly ~3 terabytes of data).
3 terabytes is 3,000,000,000 kilobytes.
That’s 3 billion kilobytes.
According to a news article I saw, there have been ~450k words total published for this campaign (*this was while it was going on, that number has probably changed, but you’re about to see why that still doesn’t matter). So, roughly speaking, ~450k of text is ~1012 KB (I’m going off the document size of a plain text doc for a fic whose word count is ~440k).
So 1,012 out of 3,000,000,000.
Aka 0.000034%.
And that 0.000034% of 3 billion kilobytes is only 2/3s of the data for the first stage of training.
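For anyone who wants to check the arithmetic, using only the figures above:

```python
campaign_kb = 1_012              # ~450k words as a plain text document
common_crawl_kb = 3_000_000_000  # ~3 TB of Common Crawl data used for LLaMA

share = campaign_kb / common_crawl_kb
print(f"{share:.6%}")  # 0.000034%
```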
And not to beat a dead horse, but 0.000034% is still grossly overestimating the potential impact of posting A/B/O fic. Remember, only parts of AO3 would get scraped for Common Crawl datasets. Which are also huge! The October 2022 Common Crawl dataset is 380 tebibytes. The April 2021 dataset is 320 tebibytes. The 3 terabytes of Common Crawl data used to train LLaMA was randomly selected data that totaled to less than 1% of one full dataset. Not to mention, LLaMA’s training dataset is currently on the (much) larger side compared to most LLM training datasets.
I also feel the need to point out again that AO3 is trying to prevent any Common Crawl scraping in the future, which would include protection for these new stories (several of which are also locked!).
Omegaverse just isn’t going to do anything to AI. Individual fics are going to do even less. Even if all of AO3 suddenly became omegaverse, it’s just not prominent enough to influence anything in regards to LLMs. You cannot affect training datasets in any meaningful way doing this. And while this might seem really disappointing, this is actually a good thing.
Remember that anything an individual can do to LLMs, the person you hate most can do the same. If it were possible for fandom to corrupt AI with omegaverse, fascists, bigots, and just straight up internet trolls could pollute it with hate speech and worse. AI already carries a lot of biases even while developers are actively trying to flatten that out, it’s good that organized groups can’t corrupt that deliberately.
#generative ai#pulling this out wasnt really prompted by anything specific#so much as heard some repeated misconceptions and just#sighs#nope#incorrect#u got it wrong#sorry#unfortunately for me: no consistent tag to block#sigh#ao3
101 notes
Text
to me, the question of whether hera would want a body is first and foremost a question of autonomy and ability. she has an internal self-image, i think it's meaningful that the most pivotal moments in her character arc take place in spaces where she can be perceived the way she perceives herself and interact with others in a (relatively) equal and physical capacity, and that's worth considering. but i don't think it's about how she looks, or even who she is - and i think she's the same person either way; she's equally human without a body, and having a body wouldn't make her lived experience as an AI magically disappear - so much as it's about how she would want to live.
like most things with hera, i'm looking at this through a dual lens of disability and transness, both perspectives from which the body - and particularly disconnect from the body - is a concern. the body as the mechanism by which she's able to interact with the world; understanding her physical isolation as a product of her disability, the body as a disability aid. the body as it relates to disability, in constant negotiation. the body as an expression of medical transition, of self-determination, of choice. as a statement of how she wants to be seen, how she wants to navigate the world, and at the same time reckoning with the inevitable gap between an idealized self-image and a lived reality, especially after a long time spent believing that self-image could never be visible to anyone else.
it's critical to me that it should never imply hera's disability is 'fixed' by having a body, only that it enables her to interact with the world in ways she otherwise couldn't. her fears about returning to earth are about safety and ability; the form she exists in dictates the life she's allowed to lead and has allowed people to invade her privacy and make choices for her. dysphoria and disability both contribute to disembodiment - in an increasingly digitized world, the type of alienation that feels like your life can only exist in a virtual space... maybe there's something about the concept of AI embodiment, in particular as it relates to hera, that appeals to me because of what it challenges about what makes a 'real woman.' when it's about perception, about how others see her and how she might observe / be impacted by how she's treated differently, even subconsciously. it's about feeling more present in her life and interfacing with the world. but it's not in itself a becoming; it doesn't change how she's been shaped by her history or who she is as a person.
i think it comes back to the 'big picture' as a central antagonistic force in wolf 359, and how - in that context, in this story - it adds a weight to this hypothetical choice. hera is everywhere, and she's never really anywhere. she's got access to more knowledge than most people could imagine, but it's all theoretical or highly situational; she doesn't have the same life experiences as her peers. she has the capacity to understand that 'big picture' better than most people, but whatever greater portion of the universe she understands is nothing next to infinity and meaningless without connection and context. it's interesting to me that hera is one of the most self-focused and introspective people on the show. her loyalties and decisions are absolute, personal, emotionally driven. she's lonely; she always feels physically away from the others. she misremembers herself sitting at the table with the rest of the crew. she imagines what the ocean is like. there's nothing to say that hera having a body is the only solution for that, but i like what it represents, and i honestly believe it'd make her happier than the alternatives. if there's something to a symbolically narrowed focus that allows for a more solid sense of self... that maybe the way to make something of such a big, big universe is to find a tiny portion of it that's yours and hold onto it tight.
#wolf 359#w359#hera wolf 359#idk. processing something. as always i have more to say but it's impossible to communicate all at once#it's a meaningful idea to me and i think there's a LOT more that can be done with it thematically than just. the assumption of normalcy#so much of hera's existence is about feeling trapped and that's only going to get worse on earth and within these two contexts#that's something i really feel for. especially with. mmm.#i don't like the idea that who hera is is tied to the way she exists because it seems to weirdly reinforce her own misconception#that there can never be another life for her.#and all of these things are specific to hera and to the themes of wolf 359 and NOT about AI characters in general#in other stories there are other considerations.#the best argument i can make against it is that she says getting visuals from one place is weird and she doesn't like it. but that's#a totally different situation where it's a further limitation of her ability without a trade off. it's a different consideration i think#when it allows her more freedom. to go somewhere and be completely alone by herself. to feel like she has more control and more privacy#to be able to hug her friends. or feel the rain. it would be one thing if she felt content existing 'differently'#but she... doesn't. canonically she doesn't. and i think that has to be taken into account.#i think you can tell a meaningful and positive story about disability without giving her physical form on earth too#but i think it has to be considered that those are limitations for her and that the way she exists feels isolating to her.#idk. a lot of the suggestions people come up with feel like they're coming from a place of compromise that i don't think is necessary#there are plenty of ways that having a body would be difficult for hera and i guess it's hopeful to me to think#maybe she'd still find it worth it.
165 notes
Text
Something went right wrong 🤣.
I still would though. Might be a little bit challenging but I’m sure I could rise to the occasion!
#ai art#ai generated#ai artwork#ai girls#android#gynoid#steampunk#robots#ai misconceptions#ai mistakes#two heads#two headed#I still would
3 notes
Text
It's hilarious how nanowrimo fails at understanding what their event is about. Thinking that writing is about filling pages with a certain number of words is a misconception about what writing actually is, one you usually only see from techbros.
But here they are, from the event of writing a story in 30 days, telling you that you can just prompt a generative AI and let it spit out 50k in a few seconds and that is somehow just like writing 🤦♀️
2K notes
Text
As national legislation on deepfake pornography crawls its way through Congress, states across the country are trying to take matters into their own hands. Thirty-nine states have introduced a hodgepodge of laws designed to deter the creation of nonconsensual deepfakes and punish those who make and share them.
Earlier this year, Democratic congresswoman Alexandria Ocasio-Cortez, herself a victim of nonconsensual deepfakes, introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act, or Defiance Act. If passed, the bill would allow victims of deepfake pornography to sue as long as they could prove the deepfakes had been made without their consent. In June, Republican senator Ted Cruz introduced the Take It Down Act, which would require platforms to remove both revenge porn and nonconsensual deepfake porn.
Though there’s bipartisan support for many of these measures, federal legislation can take years to make it through both houses of Congress before being signed into law. But state legislatures and local politicians can move faster—and they’re trying to.
Last month, San Francisco City Attorney David Chiu’s office announced a lawsuit against 16 of the most visited websites that allow users to create AI-generated pornography. “Generative AI has enormous promise, but as with all new technologies, there are unintended consequences and criminals seeking to exploit the new technology. We have to be very clear that this is not innovation—this is sexual abuse,” Chiu said in a statement released by his office at the time.
The suit was just the latest attempt to try to curtail the ever-growing issue of nonconsensual deepfake pornography.
“I think there's a misconception that it's just celebrities that are being affected by this,” says Ilana Beller, organizing manager at Public Citizen, which has been tracking nonconsensual deepfake legislation and shared their findings with WIRED. “It's a lot of everyday people who are having this experience.”
Data from Public Citizen shows that 23 states have passed some form of nonconsensual deepfake law. “This is such a pervasive issue, and so state legislators are seeing this as a problem,” says Beller. “I also think that legislators are interested in passing AI legislation right now because we are seeing how fast the technology is developing.”
Last year, WIRED reported that deepfake pornography is only increasing, and researchers estimate that 90 percent of deepfake videos are of porn, the vast majority of which is nonconsensual porn of women. But despite how pervasive the issue is, Kaylee Williams, a researcher at Columbia University who has been tracking nonconsensual deepfake legislation, says she has seen legislators more focused on political deepfakes.
“More states are interested in protecting electoral integrity in that way than they are in dealing with the intimate image question,” she says.
Matthew Bierlein, a Republican state representative in Michigan, who cosponsored the state’s package of nonconsensual deepfake bills, says that he initially came to the issue after exploring legislation on political deepfakes. “Our plan was to make [political deepfakes] a campaign finance violation if you didn’t put disclaimers on them to notify the public.” Through his work on political deepfakes, Bierlein says, he began working with Democratic representative Penelope Tsernoglou, who helped spearhead the nonconsensual deepfake bills.
At the time in January, nonconsensual deepfakes of Taylor Swift had just gone viral, and the subject was widely covered in the news. “We thought that the opportunity was the right time to be able to do something,” Bierlein says. And Bierlein says that he felt Michigan was in the position to be a regional leader in the Midwest, because, unlike some of its neighbors, it has a full-time legislature with well-paid staffers (most states don’t). “We understand that it's a bigger issue than just a Michigan issue. But a lot of things can start at the state level,” he says. “If we get this done, then maybe Ohio adopts this in their legislative session, maybe Indiana adopts something similar, or Illinois, and that can make enforcement easier.”
But what the penalties for creating and sharing nonconsensual deepfakes are—and who is protected—can vary widely from state to state. “The US landscape is just wildly inconsistent on this issue,” says Williams. “I think there's been this misconception lately that all these laws are being passed all over the country. I think what people are seeing is that there have been a lot of laws proposed.”
Some states allow for civil and criminal cases to be brought against perpetrators, while others might only provide for one of the two. Laws like the one that recently took effect in Mississippi, for instance, focus on minors. Over the past year or so, there have been a spate of instances of middle and high schoolers using generative AI to make explicit images and videos of classmates, particularly girls. Other laws focus on adults, with legislators essentially updating existing laws banning revenge porn.
Unlike laws that focus on nonconsensual deepfakes of minors, on which Williams says there is a broad consensus that they are an “inherent moral wrong,” legislation around what is “ethical” when it comes to nonconsensual deepfakes of adults is “squishier.” In many cases, laws and proposed legislation require proving intent, that the goal of the person making and sharing the nonconsensual deepfake was to harm its subject.
But online, says Sara Jodka, an attorney who specializes in privacy and cybersecurity, this patchwork of state-based legislation can be particularly difficult to enforce. “If you can't find a person behind an IP address, how can you prove who the person is, let alone show their intent?”
Williams also notes that in the case of nonconsensual deepfakes of celebrities or other public figures, many of the creators don’t necessarily see themselves as doing harm. “They’ll say, ‘This is fan content,’ that they admire this person and are attracted to them,” she says.
State laws, Jodka says, while a good start, are likely to have limited power to actually deal with the issue, and only a federal law against nonconsensual deepfakes would allow for the kind of interstate investigations and prosecutions that could really force justice and accountability. “States don't really have a lot of ability to track down across state lines internationally,” she says. “So it's going to be very rare, and it's going to be very specific scenarios where the laws are going to be able to even be enforced.”
But Michigan’s Bierlein says that many state representatives are not content to wait for the federal government to address the issue. Bierlein expressed particular concern about the role nonconsensual deepfakes could play in sextortion scams, which the FBI says have been on the rise. In 2023, a Michigan teen died by suicide after scammers threatened to post his (real) intimate photos online. “Things move really slow on a federal level, and if we waited for them to do something, we could be waiting a lot longer,” he says.
96 notes
Text
"The Judges, both of the supreme and inferior Courts, shall hold their Offices during good Behaviour, and shall, at stated Times, receive for their Services, a Compensation, which shall not be diminished during their Continuance in Office." (Article III, Section 1) [emphasis added]
Alexander Hamilton would be outraged to know that the current Supreme Court justices assume the Constitution gives them lifetime appointments — regardless of their behavior. He wouldn’t understand how any justice could overlook Article III, Section I that states that judges and justices “shall hold their Offices during good Behaviour.”
In the above commentary, Jack Jordan makes an excellent case that the Founders' intentions regarding the tenure of federal justices and judges have been grossly misinterpreted--and by justices who claim to be "originalists." Below are some excerpts:
A favorite falsehood by fake originalists (including those on SCOTUS) is that federal judges have “life tenure” or “lifetime appointments” (essentially the right to employment for life). Nothing explicitly or implicitly in our Constitution supports that myth. Often, so-called originalists who assert such falsehoods are lying to us about our Constitution. [...] Our Constitution (Article III) strongly and clearly emphasized that all federal “Judges,” i.e., “of the supreme [court] and [all] inferior Courts shall” (and may) “hold their Offices” only “during good Behaviour.” This particular principle was discussed repeatedly and in multiple respects during the debates over whether the people should ratify our Constitution. Such discussions are evidence of what the people actually did ratify. Such discussions are evidence of what the people (including Federalists and Antifederalists) understood our Constitution meant. Some of the most obvious and emphatic statements were by Alexander Hamilton in The Federalist No. 78. Hamilton emphasized that some state “constitutions” already “established GOOD BEHAVIOR as the tenure of their judicial offices” and our Constitution “would have been inexcusably defective, if it had [failed to include] this important feature of good government.” “The standard of good behavior for the continuance in office of the judicial magistracy” was carefully (and repeatedly) chosen to be “one of the most valuable of the modern improvements in the practice of government.” [color/ emphasis added]
______________ Alexander Hamilton image was AI generated by Shutterstock.
[See more excerpts below the cut.]
[...] Hamilton also emphasized that judges are “servant[s]” or “representative[s]” of “the people.” We the People used our Constitution (Article III) to impose the “standard of good behavior” on judges as an “excellent barrier to the encroachments and oppressions of [all our] representative[s]” and “to secure a steady, upright, and impartial administration of the laws” by all our public servants. [...] Repeatedly, Hamilton and James Madison emphasized similar principles. Ours is “a republic, where every magistrate ought to be personally responsible for his behavior in office.” The Federalist No. 70 (Hamilton). Having “courts composed of judges holding their offices” only “during good behavior” is a “powerful means” for ensuring “the excellences of republican government may be retained and its imperfections lessened or avoided.” The Federalist No. 9 (Hamilton). “The tenure by which the judges are to hold their places, is, as it unquestionably ought to be, that of good behavior.” The Federalist No. 39 (James Madison). Only “judges” who “behave properly, will be secured in their places for life.” The Federalist No. 79 (Hamilton). In The Federalist No. 81 (Hamilton) also addressed a particular form of bad judicial behavior that is remarkably common among some SCOTUS justices: “judges” committing “deliberate usurpations” of “authority” that was not delegated to them by our Constitution. Hamilton also emphasized “the important constitutional check which the power of instituting impeachments” (by the House of Representatives) “and of determining upon them” (in a trial by the Senate) “would give to” Congress as “the means of punishing [the] presumption” of judges usurping powers that the Constitution did not give judges or courts (or to Congress, which creates all federal courts below SCOTUS). [color/ emphasis added]
So the Founders expected federal judges and justices who were not showing "good behavior" to be removed.
This also suggests that they would have expected the Supreme Court to develop a code of ethics that had actual teeth, in addition to the institutional check against bad judicial behavior that they put in place by allowing Congress to impeach corrupt justices.
Unfortunately, the Founders didn't expect that in the future one party in Congress (the Republicans) would be so corrupt that there is no way they would ever impeach the equally corrupt right-wing "politicians in robes" on the current Supreme Court.
Still, anytime a justice asserts that they have tenure for life in an interview, the interviewer might want to remind them about that "good behavior" stipulation in Article III, and ask them how they are making sure they are fulfilling that requirement for their continued tenure.
#scotus#good behavior#justices don't have tenure for life#article III section 1#the constitution#alexander hamilton#james madison#jack jordan#black-collar crime#my edits
52 notes
Text
Noticing an uptick in cute animal ai videos on here, remember to think twice about the animal videos you reblog. Especially if they're 1. From a 'cute animal' blog (which already tend to post harmful animal content for the sake of perceived cuteness) 2. Of low quality (ai doesn't seem to be able to generate detailed videos yet) 3. Showing the animal(s) performing odd actions.
The two examples I saw were boars eating apples, and one of them was climbing on.. the tree? And the other was of a cat sitting with a nest of owls, which is not only incredibly out of character for both animals to do, but promotes the idea that wild birds and cats can interact w/ each other and that is SAFE and CUTE! when it REALLY ISNT!
Honestly, I wouldn't be as upset about ai animal videos if it weren't for the fact that this is absolutely going to be abused to create and spread misconceptions of the animals it's capturing. Don't tolerate this shit
22 notes
Text
When it comes to issues with "AI" for writing (by which I mean essentially pressing a "write" button and having it do it all for you), I think a lot of people miss them because they don't make the connection between AI and plagiarism.
I've described the main way to avoid plagiarizing as knowing a subject well enough to put it into your own words.
One of the most common ways people try to hide plagiarism is to directly quote something without credit, but change words here and there. It can be comically transparent when compared side-by-side.
What we're calling "AI" literally cannot know or understand a subject. It's taking whatever it's saying from somewhere, and changing words around depending on prompts.
The person giving the prompt is essentially outsourcing the plagiarism to an automated system.
What's more, since the AI literally can't understand the subject, it will potentially repeat misconceptions, mistakes, or rephrase things in ways that simply make them wrong.
It can't know it's making a mistake. It can only "know" that what's being written is within an acceptable threshold of what words a person might string together given its data.
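A toy version of that idea -- a word-level Markov chain that strings words together purely from observed patterns. Real LLMs are incomparably more sophisticated, but the relevant property is the same: the output is statistically plausible, and nothing in the system understands it.

```python
import random
from collections import defaultdict

# Learn which words follow which from a tiny sample: pattern, not meaning.
text = "the cat sat on the mat and the cat ran under the mat"
words = text.split()
follows = defaultdict(list)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])  # any word ever seen after this one
    output.append(word)
print(" ".join(output))  # plausible word strings, zero comprehension
```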
Given that the person is using AI instead of just writing it themselves, it's also probable that the person's knowledge is lacking, and that they won't be able to recognize or correct these mistakes.
It's a mess of plagiarism and potential misinformation, is what I'm saying, which is particularly bad if someone is "writing" non-fiction.
I don't wish harm on people, but I do hope people face consequences for selling non-fiction misinformation generated by AI. It's genuinely dangerous.
Like, if you're going to abuse AI, at least stick to fiction. We can disagree on publishing such things for profit, but at least it's not giving someone dangerously incorrect electrician instructions or something.
96 notes
Text
Stop believing these 10 Generative AI myths! Get accurate insights and separate fact from fiction in the evolving world of artificial intelligence.
#AI Misconceptions#AI Myths Busted#AI Myths Debunked#Concept of Generative AI#Generative AI#Generative AI Misconceptions#Generative AI Myths#Misconceptions About AI
0 notes
Note
"alien fascists" and you're over here thinking about the Combine from Half-Life or the Empire from Star Wars, but really they mean multicolored space gems
It's because calling them "fascists" conveys a very particular mental image, but I think "alien" is the more important part of the equation.
The Diamonds do not think like human beings. Gems, in general, do not think like human beings. They are basically like AIs. Rose proves that they are capable of changing, of course, but it's hard for them, because overall Gem culture is rigid and hierarchical, not to mention immortal which puts other organisms' life into a different perspective.
The Diamonds are at their core a colonizing species. Not because they're so evil: it's literally how they reproduce. Their survival depends on stealing resources from other planets: and sure, they are immortal which means they don't die at a fast rate, but Gems can be shattered for any reason. Of course, a byproduct of there being many Gems, and the culture assigning a certain purpose to every Gem, makes them generally expendable, much like insects: so yes, sometimes, Gems are shattered by the Diamonds themselves if they are deemed unfitting or defective.
It's a harsh culture, inherently dehumanized, because they are not humans. They are drones. The Diamonds are the queen bees making sure the machine is oiled enough for their society to work. And it works... functionally. Emotionally, that's another matter - and this is where Steven intervenes, because he sees Gems as functionally people, and he has seen how much their grief for Pink Diamond has wracked them.
I think we should remember the Blue and Orange morality trope. Gems don't think like humans, unless they're given a chance to change their mentality. And change is pretty much the core theme of the show: everyone can change, if they want to and if they put effort into it. Hell, that's the very reason of Peridot's character!
This has been mostly a ramble lol. My point is, while the Diamonds are certainly villainous from our point of view, and they have committed cruel acts even in the context of their culture (the Cluster), calling them "fascists" and even worse spreading the misconception that Steven "forgave Nazis" is a gross simplification. And, as other people have said, the show has never promoted violent justice and punishment in the first place, so I don't see how anyone could genuinely expect that Steven would pull out the Breaking Point to kill his abusive grandma.
50 notes
Text
⏳✨ Welcome to Lin’s Realm!✨⏳
Find me on: Twitch | Insta | Twitter | YT | AO3
Stuff I’m proud of:
Wisdomverse: (Masterpost!) A duo of Zelda AUs, containing:
Wielders of Wisdom: a Zeldas-meet AU comic (start here), and
The Secrets We Keep: an LU tale of mysteries and misconceptions (start here)!
#lin draws (<- My art! Though I work in generative AI, all my art is drawn by hand and completely my own!)
#lin thinks (<- Theories and analyses galore :)
Redbubble (Stickers and other stuff!!)
Zelda: ALttP + Palace of the Four Sword Playthrough (vids/vods)
Triangle Strategy Challenge Runs (NG Hard Mode Golden | NG+ Hard Mode Deathless)
FE:Engage + Xenologue Run (Maddening Mode Blind)
Currently streaming: Art (Mondays) | Triangle Strategy HM Random Army (Tues/Sun) | Echoes of Wisdom (Wed/Fri)
If you like my stuff, consider leaving me a tip!
I am also open for questions and suggestions!
Lastly, thanks for all the support, folks! Y’all are great <3.
Have a wonderful day!
#finally got around to making one of these!#lin draws#lin thinks#introductory post#artist#small streamer#vtuber#lin responds#lin writes#wielders of wisdom#lu the secrets we keep
170 notes
Text
If I might make a submission to the "good vs. bad ways of using and/or thinking about AI" discussion:
AI quickly providing alt text or captions when it would be impractical for a human to get it done in the required time frame is good. Example: live subtitles on a stream. (Yes, that's basically AI. You train voice recognition programs on samples in the target language.)
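As one concrete illustration, here is a sketch using OpenAI's open-source Whisper model (one option among several, chosen purely for illustration; the file name is hypothetical, and live captioning runs the same idea over streamed audio chunks):

```python
# pip install openai-whisper
import whisper

model = whisper.load_model("base")             # small general-purpose model
result = model.transcribe("stream_audio.mp3")  # hypothetical audio file
print(result["text"])
```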
However: AI being used as an excuse to not put care, effort, and resources into accessibility — when it would be feasible, as it usually is, for a human to write the image description or subtitles — is bad, and specifically, it's a deeply ableist misconception. Example: high-production value YouTube videos where the only closed captioning is inexplicably still "auto-generated".
As someone with audio processing issues, I'm glad that generated captions are at least occasionally useful, and as someone who writes a lot of image descriptions, I'm glad that technology can help extract text to save me some typing. But please, take it from me and my experience: no such tools I've encountered can consistently get near the quality of anything written by a person in all cases and contexts in which captions, IDs, etc are needed.
You still need to care about accessibility. Especially now that there are tools making it easier for you to do so — you just can't assume that they'll do the whole job for you.
42 notes