#trolls and humans may look alike but their bodies are different in many specific ways
nana2009 · 9 months ago
hyea :P although i dont think that would happen for a few thousand years due to fear and reluctance of mixing different species dna until someone *side eye* finally cast the first stone so the process could finally start being improved and more widespread
tfw no internet, davekat
genetically inherited cape thieves
pov you were caught escaping your room while grounded
612 notes
monstersdownthepath · 27 days ago
Homebrew Horror: Unwanted Amalgam
(Art source, and probably one of the better fanart pieces of this critter I've seen!)
Remember to eat your veggies, don't just scoot them around on your plate.
Malevolent boogeymen known by many names in many different cultures across the Inner Sea, Unwanted Amalgams are twisted aberrations formed from wasted and (as their name suggests) unwanted foods. Believed to be spirits embodying wasteful gluttony, Amalgams don't just rise out of plates of uneaten broccoli at a child's table or leftovers which go sour after they're forgotten, but instead arise from truly egregious displays of carelessness; whole banquets squandered when a noble grows full after a single dish and orders the rest disposed of, countless "ugly" pieces of produce thrown away by farmers and citizens alike for no reason but appearance, or truly impressive amounts of food being hoarded away from hungry mouths and left to rot.
Amalgams weave their bodies from this unwanted food, knitting countless pieces of edibles together into humanoid shapes whose general appearance depends on the most common form of food within them: an Amalgam constructed mostly of wasted meat may resemble a towering troll or ogre, while a primarily produce- or grain-based one may look more like an especially tall and willowy elf. The one displayed above is one of the rarest types, constructed of wasted sweets and confections, and ambulates more like an insect than a human, with limb proportions to match. Regardless of their general shape, the ways they move and carry the weight of their amorphous bodies prevent anyone from mistaking them for a human in all but complete darkness.
Because they arise from food waste, Amalgams are more recent boogeymen, ones that have begun to haunt the modernizing world as food production begins to exceed its demand, stalking through urban areas where people can afford to waste food because, in their minds, more will always be available. Due to their recent appearance, they are poorly studied and poorly understood, and have little desire to talk specifics about their motivations, origins, or desires beyond the immediately obvious... though they ARE quite talkative. To the point many wish they would stop.
The Amalgams possess a twisted sense of justice which the vast majority of them are incredibly vocal about, launching into soliloquy at the slightest prompting or provocation. Though they are all born from an incident of incredible magnitude, they are motivated to punish any act of wastefulness or gluttony they observe, no matter how small. Everything from a restaurant throwing away hundreds of pounds of perfectly edible food down to a child refusing to eat their vegetables may incite the wrath of an observing Amalgam, who will confront these unfortunates and command them to perform some task for it to spare them a terrible fate. These tasks are set by the whims of the Amalgam and run the gamut from the mercifully ordinary (finish your meal) to the nonsensical (gather 100 red objects and place them in a circle in their front yard) to the impossible (slay a monster with an inadequate weapon), but failure to complete them within an arbitrary time limit sees the victim pummeled into helplessness by the horror and, in a cruel reversal of fate, consumed by it.
Attempting to fight for one's life against an Amalgam is no easy task. Their aberrant physiology renders them impervious to many reliable tricks, and whatever strange forces animate their bodies also knit them back together with frightening speed (to the point of returning them from death), though the ever-reliable fire and acid damage can destroy them beyond their ability to regenerate. Magic which affects only plantlife also affects Amalgams, even if they aren't entirely made of plant matter, and of course all Amalgams subconsciously desire to be eaten, rendering them extremely vulnerable to any hungry beast that attempts to take a bite out of them.
Unwanted Amalgam CR 6
Neutral Evil Large Aberration
Init +3; Senses Darkvision 60ft; Perception +13
Aura Frightful Presence (30ft, DC 15, 2d6 rounds)
------ Defense ------
AC 18, touch 12, flat-footed 15 (+3 Dex, +6 natural armor, -1 size)
HP 48 (8d8+14), Regeneration 3 (Acid, Fire, bite attacks)
Fort +6, Ref +5, Will +8
Defensive Abilities Pull Together; DR 4/--; Immune critical hits, precision damage
Weaknesses Vulnerable to Putrefaction, Yearn for Purpose
------ Offense ------
Speed 40ft, climb 40ft
Melee Bite +10 (1d8+5 plus Grab), 2 slams +8 (1d6+3 plus Grab)
Space 10ft; Reach 10ft
Special Attacks Many-Armed Grapple, Swallow Whole (1d10 bludgeoning, AC 13, HP 5)
Spell-like Abilities (CL 8; Concentration +9)
Constant--Spider Climb
At-will--Dancing Lights, Ghost Sound, Prestidigitation
1/day--Dimension Door
------ Statistics ------
Str 20, Dex 16, Con 17, Int 14, Wis 15, Cha 13
Base Atk +6; CMB +11; CMD 25
Feats Combat Reflexes, Great Fortitude, Intimidating Prowess, Multiattack
Skills Acrobatics +18, Climb +22, Intimidate +21, Knowledge (Local) +11, Perception +13, Stealth +11, Survival +13
Racial Modifiers +8 Acrobatics, +4 Intimidate
Languages Aklo, Common, any one local language
SQ Compression
------ Ecology ------
Environment any urban
Organization solitary
Treasure standard (rations, pilfered items)
------
Combat: Before battle, an Unwanted Amalgam will clamber out of reach and repeatedly intimidate creatures to weaken them before leaping in. It will also use its surprise round to intimidate the enemy it wishes to punish most, if possible. Amalgams are simple creatures in a fight: they attack with their slams and attempt to grapple as many creatures as possible, swallowing the smallest among them while beating the rest to unconsciousness or death.
Morale: Amalgams are fierce fighters which pursue their prey relentlessly; they always fight to the death, though their supernatural resilience prevents some deaths from being the end of them.
------ Special Abilities ------
Many-Armed Grapple (Ex): Amalgams can produce up to six additional limbs as a free action to maintain grapples against an equal number of Medium or smaller creatures, allowing them to grapple multiple creatures at once while still being able to make two slam attacks. When not grappling a creature, these excess limbs are instantly re-absorbed.
Pull Together (Ex): An Amalgam's severed portions remain animate, crawling back towards the whole at a rate of 10ft per round at the end of the Amalgam's turn. Each round the Amalgam ends adjacent to a severed piece of itself, it absorbs the piece (regenerating the severed portion instantly) and regains 1 HP. A severed piece can be destroyed with at least 1 point of Fire or Acid damage, or damage done by a bite attack. In addition, an Amalgam that is slain will return to life 1d4 hours later at 0 HP unless its remains are burned, doused in acid, or consumed by one or more other creatures.
Swallow Whole (Ex): An Amalgam can swallow Small or smaller creatures grappled by its limbs without needing to transfer them to its mouth first; if it succeeds at the check to pin the creature, it simply raises the creature over its head and drops them into its waiting maw. When a creature cuts its way out, the hole instantly closes behind that creature.
Vulnerable to Putrefaction (Ex): Regardless of their composition, Amalgams are treated as both Aberrations and Plants for the purposes of harmful spells (such as Blight) and abilities (such as Favored Enemy). A Putrefy Food and Drink spell cast on an Amalgam deals 2d8 Acid damage to it, and if that spell is cast on its remains, its body is destroyed utterly and it cannot return to life (see Pull Together, above). Conversely, a Purify Food and Drink spell cast on an Amalgam restores 2d8 HP to it and grants it the benefits of Haste for 1 round.
Yearn for Purpose (Ex): All Amalgams subconsciously desire the destiny of all food: to be eaten. Bite attacks made against them resolve as touch attacks, and damage from bites both bypasses their Damage Reduction and suppresses their regeneration for 1 round.
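For anyone running this critter on a virtual tabletop, the Pull Together bookkeeping above can be sketched as a small script; the class and function names here are my own illustration, not part of the stat block:

```python
# Minimal sketch of Pull Together bookkeeping: severed pieces crawl 10ft
# toward the Amalgam at the end of its turn, adjacent pieces are absorbed
# for 1 HP each, and fire/acid/bite damage destroys a piece outright.

class SeveredPiece:
    def __init__(self, distance_ft):
        self.distance_ft = distance_ft
        self.destroyed = False

    def take_damage(self, damage_type):
        # At least 1 point of Fire or Acid damage, or a bite, destroys a piece.
        if damage_type in ("fire", "acid", "bite"):
            self.destroyed = True

class Amalgam:
    def __init__(self, hp, max_hp=48):
        self.hp = hp
        self.max_hp = max_hp
        self.pieces = []

    def end_of_turn(self):
        # Each surviving piece crawls 10ft closer; pieces that reach 0ft
        # (adjacent) are absorbed, restoring 1 HP apiece.
        for piece in self.pieces:
            if not piece.destroyed:
                piece.distance_ft = max(0, piece.distance_ft - 10)
        absorbed = [p for p in self.pieces
                    if not p.destroyed and p.distance_ft == 0]
        self.hp = min(self.max_hp, self.hp + len(absorbed))
        self.pieces = [p for p in self.pieces
                       if p not in absorbed and not p.destroyed]

amalgam = Amalgam(hp=20)
amalgam.pieces = [SeveredPiece(10), SeveredPiece(30)]
amalgam.end_of_turn()
print(amalgam.hp, len(amalgam.pieces))  # 21 1: one piece absorbed, one still 20ft out
```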
78 notes
neptunecreek · 6 years ago
Content Moderation is Broken. Let Us Count the Ways.
Social media platforms regularly engage in “content moderation”—the depublication, downranking, and sometimes outright censorship of information and/or user accounts from social media and other digital platforms, usually based on an alleged violation of a platform’s “community standards” policy. In recent years, this practice has become a matter of intense public interest. Not coincidentally, thanks to growing pressure from governments and some segments of the public to restrict various types of speech, it has also become more pervasive and aggressive, as companies struggle to self-regulate in the hope of avoiding legal mandates.
Many of us view content moderation as a given, an integral component of modern social media. But the specific contours of the system were hardly foregone conclusions. In the early days of social media, decisions about what to allow and what not to were often made by small teams or even individuals, and often on the fly. And those decisions continue to shape our social media experience today.
Roz Bowden—who spoke about her experience at UCLA’s All Things in Moderation conference in 2017—ran the graveyard shift at MySpace from 2005 to 2008, training content moderators and devising rules as they went along. Last year, Bowden told the BBC:
We had to come up with the rules. Watching porn and asking whether wearing a tiny spaghetti-strap bikini was nudity? Asking how much sex is too much sex for MySpace? Making up the rules as we went along. Should we allow someone to cut someone's head off in a video? No, but what if it is a cartoon? Is it OK for Tom and Jerry to do it?
Similarly, in the early days of Google, then-deputy general counsel Nicole Wong was internally known as “The Decider” as a result of the tough calls she and her team had to make about controversial speech and other expression. In a 2008 New York Times profile of Wong and Google’s policy team, Jeffrey Rosen wrote that as a result of Google’s market share and moderation model, “Wong and her colleagues arguably have more influence over the contours of online expression than anyone else on the planet.”
Built piecemeal over the years by a number of different actors passing through Silicon Valley’s revolving doors, content moderation was never meant to operate at the scale of billions of users. The engineers who designed the platforms we use on a daily basis failed to imagine that one day they would be used by activists to spread word of an uprising...or by state actors to call for genocide. And as pressure from lawmakers and the public to restrict various types of speech—from terrorism to fake news—grows, companies are desperately looking for ways to moderate content at scale.
They won’t succeed—at least if they care about protecting online expression even half as much as they care about their bottom line.
The Content Moderation System Is Fundamentally Broken. Let Us Count the Ways:
1. Content Moderation Is a Dangerous Job—But We Can’t Look to Robots to Do It Instead
As a practice, content moderation relies on people in far-flung (and almost always economically less well-off) locales to cleanse our online spaces of the worst that humanity has to offer so that we don't have to see it. Most major platforms outsource the work to companies abroad, where some workers are reportedly paid as little as $6 a day and others report traumatic working conditions. Over the past few years, researchers such as EFF Pioneer Award winner Sarah T. Roberts have exposed just how harmful a job it can be to workers.
Companies have also tried replacing human moderators with AI, thereby solving at least one problem (the psychological impact that comes from viewing gory images all day), but potentially replacing it with another: an even more secretive process in which false positives may never see the light of day.
2. Content Moderation Is Inconsistent and Confusing
For starters, let’s talk about resources. Companies like Facebook and YouTube expend significant resources on content moderation, employing thousands of workers and utilizing sophisticated automation tools to flag or remove undesirable content. But one thing is abundantly clear: The resources allocated to content moderation aren’t distributed evenly. Policing copyright is a top priority, and because automation can detect nipples better than it can recognize hate speech, users often complain that more attention is given to policing women’s bodies than to speech that might actually be harmful.
But the system of moderation is also inherently inconsistent. Because it relies largely on community policing—that is, on people reporting other people for real or perceived violations of community standards—some users are bound to be more heavily impacted than others. A person with a public profile and a lot of followers is mathematically more likely to be reported than a less popular user. And when a public figure is removed by one company, it can create a domino effect whereby other companies follow their lead.
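The scale effect described above can be made concrete with a toy model: if each viewer independently flags a given post with the same small probability, the chance that the post draws at least one report grows sharply with audience size. The numbers below are illustrative, not real platform data:

```python
# Toy model: probability that a post receives at least one report when each
# of n viewers independently reports it with probability p.
# P(at least one report) = 1 - (1 - p)^n

def p_at_least_one_report(n_viewers, p_report):
    return 1 - (1 - p_report) ** n_viewers

# Same borderline post, same per-viewer report rate of 0.1%:
small_account = p_at_least_one_report(200, 0.001)      # roughly an 18% chance
public_figure = p_at_least_one_report(100_000, 0.001)  # effectively certain

print(f"{small_account:.3f} {public_figure:.6f}")
```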
Problematically, companies’ community standards also often feature exceptions for public figures: That’s why the president of the United States can tweet hateful things with impunity, but an ordinary user can’t. While there’s some sense to such policies—people should know what their politicians are saying—certain speech obviously carries more weight when spoken by someone in a position of authority.
Finally, when public pressure forces companies to react quickly to new “threats,” they tend to overreact. For example, after the passing of FOSTA—a law purportedly designed to stop sex trafficking but which, as a result of sweepingly broad language, has resulted in confusion and overbroad censorship by companies—Facebook implemented a policy on sexual solicitation that was essentially a honeypot for trolls. In responding to ongoing violence in Myanmar, the company created an internal manual that contained elements of misinformation. And it’s clear that some actors have greater ability to influence companies than others: A call from Congress or the European Parliament carries a lot more weight in Silicon Valley than one that originates from a country in Africa or Asia. By reacting to the media, governments, or other powerful actors, companies reinforce the power that such groups already have.
3. Content Moderation Decisions Can Cause Real-World Harms to Users as Well as Workers
Companies’ attempts to moderate what they deem undesirable content have all too often had a disproportionate effect on already-marginalized groups. Take, for example, the attempt by companies to eradicate homophobic and transphobic speech. While that sounds like a worthy goal, these policies have resulted in LGBTQ users being censored for engaging in counterspeech or for using reclaimed terms like “dyke”.
Similarly, Facebook’s efforts to remove hate speech have impacted individuals who have tried to use the platform to call out racism by sharing the content of hateful messages they’ve received. As an article in the Washington Post explained, “Compounding their pain, Facebook will often go from censoring posts to locking users out of their accounts for 24 hours or more, without explanation — a punishment known among activists as ‘Facebook jail.’”
Content moderation can also pose harms to business. Small and large businesses alike increasingly rely on social media advertising, but strict content rules disproportionately impact certain types of businesses. Facebook bans ads that it deems “overly suggestive or sexually provocative”, a practice that has had a chilling effect on women’s health startups, bra companies, a book whose title contains the word “uterus”, and even the National Campaign to Prevent Teen and Unwanted Pregnancy.
4. Appeals Are Broken, and Transparency Is Minimal
For many years, users who wished to appeal a moderation decision had no feasible path for doing so...unless of course they had access to someone at a company. As a result, public figures and others with access to digital rights groups or the media were able to get their content reinstated, while others were left in the dark.
In recent years, some companies have made great strides in improving due process: Facebook, for example, expanded its appeals process last year. Still, users of various platforms complain that appeals yield no result or go unanswered, and the introduction of more subtle enforcement mechanisms by some companies has meant that some moderation decisions come with no means of appeal at all.
Last year, we joined several organizations and academics in creating the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of minimum standards that companies should implement to ensure that their users have access to due process and receive notification when their content is restricted, and to provide transparency to the public about what expression is being restricted and how.
In the current system of content moderation, these are necessary measures that every company must take. But they are just a start.  
No More Magical Thinking
We shouldn’t look to Silicon Valley, or anyone else, to be international speech police for practical as much as political reasons. Content moderation is extremely difficult to get right, and at the scale at which some companies are operating, it may be impossible. As with any system of censorship, mistakes are inevitable.  As companies increasingly use artificial intelligence to flag or moderate content—another form of harm reduction, as it protects workers—we’re inevitably going to see more errors. And although the ability to appeal is an important measure of harm reduction, it’s not an adequate remedy.
Advocates, companies, policymakers, and users have a choice: try to prop up and reinforce a broken system—or remake it. If we choose the latter, which we should, here are some preliminary recommendations:
Censorship must be rare and well-justified, particularly by tech giants. At a minimum, that means (1) Before banning a category of speech, policymakers and companies must explain what makes that category so exceptional, and the rules to define its boundaries must be clear and predictable. Any restrictions on speech should be both necessary and proportionate. Emergency takedowns, such as those that followed the recent attack in New Zealand, must be well-defined and reserved for true emergencies. And (2) when content is flagged as violating community standards, absent exigent circumstances companies must notify the user and give them an opportunity to appeal before the content is taken down. If they choose to appeal, the content should stay up until the question is resolved. But (3) smaller platforms dedicated to serving specific communities may want to take a more aggressive approach. That’s fine, as long as Internet users have a range of meaningful options with which to engage.
Consistency. Companies should align their policies with human rights norms. In a paper published last year, David Kaye—the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression—recommends that companies adopt policies that allow users to “develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law.” We agree, and we’re joined in that opinion by a growing coalition of civil liberties and human rights organizations.
Tools. Not everyone will be happy with every type of content, so users should be provided with more individualized tools to have control over what they see. For example, rather than banning consensual adult nudity outright, a platform could allow users to turn on or off the option to see it in their settings. Users could also have the option to share their settings with their community to apply to their own feeds.
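As a rough sketch of how such user-controlled settings could work, a platform might label posts by category and filter them per user at read time rather than deleting them outright. The label names and data shapes below are hypothetical, not any real platform's schema:

```python
# Sketch of user-controlled content visibility: each post carries category
# labels, and each user opts in or out of seeing each category. A post is
# hidden only from users who have not opted in to every label it carries.

DEFAULT_SETTINGS = {"adult_nudity": False, "graphic_violence": False}

def visible_posts(posts, user_settings):
    settings = {**DEFAULT_SETTINGS, **user_settings}
    shown = []
    for post in posts:
        # Unknown labels default to hidden, erring on the cautious side.
        if all(settings.get(label, False) for label in post["labels"]):
            shown.append(post)
    return shown

posts = [
    {"id": 1, "labels": []},
    {"id": 2, "labels": ["adult_nudity"]},
]

# A user who opts in sees both posts; a default user sees only the unlabeled one.
opted_in = visible_posts(posts, {"adult_nudity": True})
default = visible_posts(posts, {})
print([p["id"] for p in opted_in], [p["id"] for p in default])  # [1, 2] [1]
```

Sharing settings with a community, as suggested above, would just mean letting one user's settings dict be copied into another user's account.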
Evidence-based policymaking. Policymakers should tread carefully when operating without facts, and not fall victim to political pressure. For example, while we know that disinformation spreads rapidly on social media, many of the policies created by companies in the wake of pressure appear to have had little effect. Companies should work with researchers and experts to respond more appropriately to issues.
Recognizing that something needs to be done is easy. Looking to AI to help do that thing is also easy. Actually doing content moderation well is very, very difficult, and you should be suspicious of any claim to the contrary.
from Deeplinks http://bit.ly/2UN1cap
0 notes