#internal data distribution
Smart Energy Finance: Funding for autonomous EV charging and GridBeyond's acquisition of Veritone Energy. Leading Smart Energy Finances: a successful Series A funding round for Rocsys, which has been developing an autonomous EV charging solution. https://www.smart-energy.com/finance-investment/smart-energy-finance-funding-for-autonomous-ev-charging-and-gridbeyonds-acquisition-of-veritone-energy/
#Business#Data & Analytics#Distributed generation#Electric Vehicles#Energy & Grid Management#Europe & UK#Features and Analysis#Finance & Investment#New technology#North America#Smart Energy#acquisition#EV charging#France#Smart Energy Finances Weekly#VPP#Yusuf Latief#Smart Energy International
Microsoft made Recall — the feature that automatically tracks everything you do in an attempt at helping you, except, you know, that it's a massive security risk and data-mining source — a dependency of the Windows File Explorer, meaning that even if you forcibly strip Recall out, you end up losing basic tools.
This is very much a "learn how to install Linux Mint on your laptop" moment. Richard Stallman et al. were entirely correct: your computer will soon have spyware integrated deep into the system internals, with no ability to cleanly remove it even for experienced, tech-savvy users.
Yes, it sucks: there is no Linux distribution that comes even close to the level of support for software and peripherals that Windows has, and even the easier distros like Mint still expect a level of tech savvy that Mac and Windows just don't require. Anyone telling you that Linux is just as easy and just as good is lying to you.
But Linux has never been easier, has never been as well supported as today, and simply doesn't contain egregious spyware (well, besides Ubuntu that one time I guess).
Generative AI Policy (February 9, 2024)
As of February 9, 2024, we are updating our Terms of Service to prohibit the following content:
Images created through the use of generative AI programs such as Stable Diffusion, Midjourney, and Dall-E.
This post explains what that means for you. We know it's impossible to remove all images created by Generative AI on Pillowfort. The goal of this new policy, however, is to send a clear message that we are against the normalization of commercializing and distributing images created by Generative AI. Pillowfort stands in full support of all creatives who make Pillowfort their home.

Disclaimer: The following policy was shaped in collaboration with Pillowfort Staff and international university researchers. We are aware that Artificial Intelligence is a rapidly evolving environment. This policy may require revisions in the future to adapt to the changing landscape of Generative AI.
-
Why is Generative AI Banned on Pillowfort?
Our Terms of Service already prohibit copyright violations, which include reposting other people's artwork to Pillowfort without the artist's permission; and because of how Generative AI draws on a database of images and text that were taken without consent from artists or writers, all Generative AI content can be considered in violation of this rule. We also had an overwhelming response from our user base urging us to take action on prohibiting Generative AI on our platform.
-
How does Pillowfort define Generative AI?
As of February 9, 2024, we define Generative AI as online tools for producing material based on large-scale data collection that is often gathered without consent or notification from the original creators.
Generative AI tools do not require skill on the part of the user and effectively replace them in the creative process (i.e., little direction or decision-making comes directly from the user). Tools that assist creativity don't replace the user, which means the user can still improve their skills and refine them over time.
For example: if you ask a Generative AI tool to add a lighthouse to an image, the image of a lighthouse appears in a completed state. Whereas if you use an assistive drawing tool to add a lighthouse to an image, you decide which tools contribute to the creation process and how to apply them.
Examples of Tools Not Allowed on Pillowfort: Adobe Firefly*, Dall-E, GPT-4, Jasper Chat, Lensa, Midjourney, Stable Diffusion, Synthesia
Examples of Tools Still Allowed on Pillowfort:
AI Assistant Tools (e.g., Google Translate, Grammarly); VTuber Tools (e.g., Live3D, Restream, VRChat); Digital Audio Editors (e.g., Audacity, GarageBand); Poser & Reference Tools (e.g., Poser, Blender); Graphic & Image Editors (e.g., Canva, Adobe Photoshop*, Procreate, Medibang, automatic filters from phone cameras)
*While Adobe software such as Adobe Photoshop is not considered Generative AI, Adobe Firefly is fully integrated into various Adobe software and falls under our definition of Generative AI. The use of Adobe Photoshop is allowed on Pillowfort; the creation of an image in Adobe Photoshop using Adobe Firefly would be prohibited.
-
Can I use ethical generators?
Due to the evolving nature of Generative AI, ethical generators are not an exception.
-
Can I still talk about AI?
Yes! Posts, Comments, and User Communities discussing AI are still allowed on Pillowfort.
-
Can I link to or embed websites, articles, or social media posts containing Generative AI?
Yes. We do ask that you properly tag your post as "AI" and "Artificial Intelligence."
-
Can I advertise the sale of digital or virtual goods containing Generative AI?
No. Advertising on Pillowfort the offsite sale of goods (digital and physical) containing Generative AI is prohibited.
-
How can I tell if software I use contains Generative AI?
As a first step, a general rule of thumb is to test the software by turning off internet access and seeing if the tool still works. If the software says it needs to be online, there's a chance it's using Generative AI and should be explored further.
You are also always welcome to contact us at [email protected] if you're still unsure.
-
How will this policy be enforced/detected?
Our Team has decided we are NOT using AI-based automated detection tools, due to how often they produce false positives and other issues. Instead, we are applying a suite of methods, developed with international university researchers, for moderating material potentially produced by Generative AI.
-
How do I report content containing Generative AI Material?
If you are concerned about a post featuring Generative AI material, please flag the post for our Site Moderation Team to conduct a thorough investigation. As a reminder, Pillowfort's existing policy regarding callout posts applies here, and harassment/brigading/etc. will not be tolerated.
Any questions or clarifications regarding our Generative AI Policy can be sent to [email protected].
Indian Prime Minister Narendra Modi is, by some measures, the most popular leader in the world. Prior to the 2024 election, his Bharatiya Janata Party (BJP) held an outright majority in the Lok Sabha (India's Parliament) — one that was widely projected to grow after the vote count. The party regularly boasted that it would win 400 Lok Sabha seats, easily enough to amend India's constitution along the party's preferred Hindu nationalist lines.
But when the results were announced on Tuesday, the BJP held just 240 seats. They not only underperformed expectations, they actually lost their parliamentary majority. While Modi will remain prime minister, he will do so at the helm of a coalition government — meaning that he will depend on other parties to stay in office, making it harder to continue his ongoing assault on Indian democracy.
So what happened? Why did Indian voters deal a devastating blow to a prime minister who, by all measures, they mostly seem to like?
India is a massive country — the most populous in the world — and one of the most diverse, making its internal politics exceedingly complicated. A definitive assessment of the election would require granular data on voter breakdown across caste, class, linguistic, religious, age, and gender divides. At present, those numbers don't exist in sufficient detail.
But after looking at the information that is available and speaking with several leading experts on Indian politics, there are at least three conclusions that I'm comfortable drawing.
First, voters punished Modi for putting his Hindu nationalist agenda ahead of fixing Indiaâs unequal economy. Second, Indian voters had some real concerns about the decline of liberal democracy under BJP rule. Third, the opposition parties waged a smart campaign that took advantage of Modiâs vulnerabilities on the economy and democracy.
Understanding these factors isn't just important for Indians. The country's election has some universal lessons for how to beat a would-be authoritarian — ones that Americans especially might want to heed heading into their own election in November.
-via Vox, June 7, 2024. Article continues below.
A new (and unequal) economy
Modi's biggest and most surprising losses came in India's two most populous states: Uttar Pradesh in the north and Maharashtra in the west. Both states had previously been BJP strongholds — places where the party's core tactic of pitting the Hindu majority against the Muslim minority had seemingly cemented Hindu support for Modi and his allies.
One prominent Indian analyst, Yogendra Yadav, saw the cracks in advance. Swimming against the tide of Indian media, he correctly predicted that the BJP would fall short of a governing majority.
Traveling through the country, but especially rural Uttar Pradesh, he prophesied "the return of normal politics": that Indian voters were no longer held spellbound by Modi's charismatic nationalist appeals and were instead starting to worry about the way politics was affecting their lives.
Yadav's conclusions derived in no small part from hearing voters' concerns about the economy. The issue wasn't GDP growth — India's is the fastest-growing economy in the world — but rather the distribution of growth's fruits. While some of Modi's top allies struck it rich, many ordinary Indians suffered. Nearly half of all Indians between 20 and 24 are unemployed; Indian farmers have repeatedly protested Modi policies that they felt hurt their livelihoods.
"Everyone was talking about price rise, unemployment, the state of public services, the plight of farmers, [and] the struggles of labor," Yadav wrote...
"We know for sure that Modi's strongman image and brassy self-confidence were not as popular with voters as the BJP assumed," says Sadanand Dhume, a senior fellow at the American Enterprise Institute who studies India.
The lesson here isn't that pocketbook concerns trump identity-based appeals everywhere; recent evidence in wealthier democracies suggests the opposite is true. Rather, it's that even the entrenched reputations of populist leaders are not unshakeable. When they make errors, even some time ago, it's possible to get voters to remember those mistakes and prioritize them over whatever culture war the populist is peddling at the moment.
Liberalism strikes back
The Indian constitution is a liberal document: It guarantees the equality of all citizens and contains measures designed to write that equality into law. The signature goal of Modi's time in power has been to rip this liberal edifice down and replace it with a Hindu nationalist model that pushes non-Hindus to the social margins. In pursuit of this agenda, the BJP has concentrated power in Modi's hands and undermined key pillars of Indian democracy (like a free press and an independent judiciary).
Prior to the election, there was a sense that Indian voters either didn't much care about the assault on liberal democracy or mostly agreed with it. But the BJP's surprising underperformance suggests otherwise.
The Hindu, a leading Indian newspaper, published an essential post-election data analysis breaking down what we know about the results. One of the more striking findings is that the opposition parties surged in parliamentary seats reserved for members of "scheduled castes" — the legal term for Dalits, the lowest caste grouping in the Hindu hierarchy.
Caste has long been an essential cleavage in Indian politics, with Dalits typically favoring the left-wing Congress party over the BJP (long seen as an upper-caste party). Under Modi, the BJP had seemingly tamped down on the salience of caste by elevating all Hindus — including Dalits — over Muslims. Yet now it's looking like Dalits were flocking back to Congress and its allies. Why?
According to experts, Dalit voters feared the consequences of a BJP landslide. If Modi's party achieved its 400-seat target, it would have more than enough votes to amend India's constitution. Since the constitution contains several protections designed to promote Dalit equality — including a first-in-the-world affirmative action system — that seemed like a serious threat to the community. It seems, at least based on preliminary data, that they voted accordingly.
The Dalit vote is but one example of the ways in which Modi's brazen willingness to assail Indian institutions likely alienated voters.
Uttar Pradesh (UP), India's largest and most electorally important state, was the site of a major BJP anti-Muslim campaign. The party unofficially kicked off its campaign in the UP city of Ayodhya earlier this year, during a ceremony celebrating one of Modi's crowning achievements: the construction of a Hindu temple on the site of a former mosque that had been torn down by Hindu nationalists in 1992.
Yet not only did the BJP lose UP, it specifically lost the constituency — the city of Faizabad — in which the Ayodhya temple is located. It's as direct an electoral rebuke to BJP ideology as one can imagine.
In Maharashtra, the second-largest state, the BJP made a tactical alliance with a local politician, Ajit Pawar, who was facing serious corruption charges. Voters seemingly punished Modi's party for turning a blind eye to Pawar's offenses against the public trust. Across the country, Muslim voters turned out for the opposition to defend their rights against Modi's attacks.
The global lesson here is clear: Even popular authoritarians can overreach.
By turning "400 seats" into a campaign slogan, an all-but-open signal that he intended to remake the Indian state in his illiberal image, Modi practically rang an alarm bell for constituencies worried about the consequences. So they turned out to stop him en masse.
The BJP's electoral underperformance is, in no small part, the direct result of its leader's zealotry going too far.
Return of the Gandhis?
Of course, Modi's mistakes might not have mattered had his rivals failed to capitalize. The Indian opposition, however, was far more effective than most observers anticipated.
Perhaps most importantly, the many opposition parties coordinated with each other. Forming a united bloc called INDIA (Indian National Developmental Inclusive Alliance), they worked to make sure they weren't stealing votes from each other in critical constituencies, positioning INDIA coalition candidates to win straight fights against BJP rivals.
The leading party in the opposition bloc — Congress — was also more put together than people thought. Its most prominent leader, Rahul Gandhi, was widely dismissed as a dilettante nepo baby: a pale imitation of his father Rajiv and grandmother Indira, both former Congress prime ministers. Now his critics are rethinking things.
"I owe Rahul Gandhi an apology because I seriously underestimated him," says Manjari Miller, a senior fellow at the Council on Foreign Relations.
Miller singled out Gandhi's yatras (marches) across India as a particularly canny tactic. These physically grueling voyages across the length and breadth of India showed that he wasn't just a privileged son of Indian political royalty, but a politician willing to take risks and meet ordinary Indians where they were. During the yatras, he would meet directly with voters from marginalized groups and rail against Modi's politics of hate.
"The persona he's developed — as somebody kind, caring, inclusive, [and] resolute in the face of bullying — has really worked and captured the imagination of younger India," says Suryanarayan. "If you've spent any time on Instagram Reels, [you'll see] an entire generation now waking up to Rahul Gandhi's very appealing videos."
This, too, has a lesson for the rest of the world: Tactical innovation from the opposition matters even in an unfair electoral context.
There is no doubt that, in the past 10 years, the BJP stacked the political deck against its opponents. They consolidated control over large chunks of the national media, changed campaign finance law to favor themselves, suborned the famously independent Indian Electoral Commission, and even intimidated the Supreme Court into letting them get away with it.
The opposition, though, managed to find ways to compete even under unfair circumstances. Strategic coordination helped them consolidate resources and mitigate the BJP's cash advantage. Direct voter outreach like the yatras helped circumvent BJP dominance of the national media.
To be clear, the opposition still did not win a majority. Modi will have a third term in office, likely thanks in large part to the ways he rigged the system in his favor.
Yet there is no doubt that the opposition deserves to celebrate. Modiâs power has been constrained and the myth of his invincibility wounded, perhaps mortally. Indian voters, like those in Brazil and Poland before them, have dealt a major blow to their homegrown authoritarian faction.
And that is something worth celebrating.
-via Vox, June 7, 2024.
#india#narendra modi#pm modi#modi#bjp#lok sabha elections#rahul gandhi#democracy#2024 elections#authoritarianism#anti authoritarian#good news#hope
Let's put some numbers to Jewish fear right now.
In news that I'm sure will thrill all antisemites, it would take startlingly little effort to foment widespread violence against us and cause another genocide of the Jewish people.
I have had many fellow Jews express to me how overwhelming it is to see the rising antisemitism. I have seen many Jews express fear at being drowned out of public, online, and IRL spaces due to dangerously violent vitriol.
I have also seen people who claim to advocate for Palestine — especially western leftists — openly mock Jews who express this fear.
Finally, I and my fellow Jews have often expressed that, while we wholeheartedly support Palestinian freedom and self-determination, it is exhausting to have to say so repeatedly, especially when we are trying to advocate for ourselves. This is not due to any latent or widespread hatred of Muslims, Arabs, or Palestinians. It is because we are an extremely maligned and marginalized minority that is fighting to be heard against strong, hostile forces that at best wish we'd shut up and at worst want us eradicated from the planet.
There is a disconnect about how much harm people can do to Jews by spreading antisemitism and refusing to dismantle their own internalized antisemitism — and everyone has internalized antisemitism. It is one of the oldest forms of prejudice in the world and is found in almost every single culture. It is as pervasive as, if not more pervasive than, white privilege. Yes. You read that right. And if asked to elaborate, I will provide numbers on that to the best of my ability. For the purposes of this post, however, I want to focus on the global distribution of religious groups only.
Specifically, this disconnect is between Jews, who are fully aware of and feel the effects of this damage, and goyim, who simply do not comprehend our marginalization.
To help, let's put some numbers to this. In this post, I'll be using the Pew Research Center's survey and findings on the Global Religious Landscape. This is the most recent data from a reputable source that I could find which surveyed every world religion at the same time. While the Jewish population has grown slightly in the intervening years, so have most (if not all) other religious populations around the globe. I wanted to use figures measured at the same time to avoid bias for or against any religious group.
For the purposes of this post, I will not be discussing folk religions or other religions. This is not because they are not important. This is because they are not a monolith and individual folk religions and other religions may have even fewer adherents per religion than Judaism. I am currently only focusing on religions and religious groups who have more adherents than Judaism.
In descending order of adherents, the number of people in the world belonging to these groups is:
2,200,000,000 (2.2 Billion) Christians
1,600,000,000 (1.6 Billion) Muslims
1,100,000,000 (1.1 Billion) Religiously unaffiliated people
1,000,000,000 (1 Billion) Hindus
500,000,000 (500 Million) Buddhists
14,000,000 (14 Million) Jews
Reduced to the simplest fractions, there are:
1100 Christians for every 7 Jews
800 Muslims for every 7 Jews
550 Religiously unaffiliated people for every 7 Jews
500 Hindus for every 7 Jews
250 Buddhists for every 7 Jews
Combined, there are 6,400,000,000 non-Jewish people in religions or religious groups (including religiously unaffiliated people).
This means that for every 7 Jews there are 3200 people in religious groups who outnumber us.
Jews are 0.2% of the global population.
When we tell you that hate is dangerous, it is because…
It would only take about 0.22% of those 6.4 billion people hating us to completely overwhelm and outnumber every single Jewish person on the planet. In other words, only 7 out of every 3,200 people.
And given how violent and aggressive people have become toward us in recent weeks, that doesn't seem far off.
No, most Christians, Muslims, Atheists/Agnostics, Hindus, and Buddhists do NOT hate Jews.
But if even 0.22% of them do hate us, Jews are at a legitimate and terrifying risk of ethnic cleansing and genocide.
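For anyone who wants to double-check the ratios above, this short sketch simply reproduces the arithmetic from the Pew figures quoted earlier in the post:

```python
# Reproduce the post's arithmetic from the Pew figures quoted above.
jews = 14_000_000
non_jews = (2_200_000_000    # Christians
            + 1_600_000_000  # Muslims
            + 1_100_000_000  # religiously unaffiliated
            + 1_000_000_000  # Hindus
            + 500_000_000)   # Buddhists

print(non_jews)                  # 6400000000 combined non-Jews
print(round(non_jews / jews))    # ~457 non-Jews per Jew, i.e. 3200 for every 7
print(f"{jews / non_jews:.2%}")  # 0.22% — share of non-Jews equal to all Jews
```

The last line is the "tipping point" figure: the fraction of the 6.4 billion that would match the entire Jewish population.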
It is not possible for Jews alone to fight this rising tide of hate. There simply aren't enough of us. And many of us are too scared to tell you the truth: if you don't vocally and repeatedly stand up for Jews (and not just the ones you agree with) you will be complicit in the genocide that follows. Police your own communities.
Nobody acting in good faith is asking you to abandon Palestinians or their fight for self-determination and equality in their homeland. All we are asking is for you to learn about antisemitism, deconstruct it in yourself, and loudly condemn it when it occurs in front of you. We are asking you to comfort us and not run away when we are scared or even angry at you. Because a lot of us are angry with you, because we are extremely scared right now and many of you are not helping us. Many of you are actively and carelessly spreading dogwhistles that further the global rise in hatred against us.
You can support Palestine AND avoid Islamophobia WITHOUT making antisemitism worse. But you canât stop antisemitism by staying silent in the face of it. And if you donât speak up, you will get us killed. Silence, in this case, is quite literally violence.
Many of us have armed guards posted at our synagogues and schools and community centers because of this. I certainly had times where my synagogue and school had to have armed security for our safety.
The only reason more of us havenât died already is because we have millennia of experience in confronting this kind of hatred and guarding against it.
But in pure numbers, if you donât speak up for us now, we donât have a chance at survival without support.
So, what can you do, specifically?:
* Take a stand or make a public statement condemning antisemitism, without mentioning another group. Acknowledge Jewish fear, pain, and current danger without contextualizing it in someone else's. It could literally be something as simple as "Antisemitism is bad. There's never a reason for it. I won't tolerate it in my presence, in real life or online." If you cannot bring yourself to publicly make this statement, you should have a serious look at yourself to understand why you can't.
* Learn about the six universal features of antisemitism and the many, various dog whistles affecting the global Jewish community
* Do not welcome people who espouse rhetoric that includes any features from the above bullet point in your community unless you are able to educate them and eliminate that behavior.
* Check in on your Jewish friends, regularly and repeatedly. Do not wait for them to reach out to you — they are scared of you. Even if you don't have the emotional space to have conversations about antisemitism, just send a message once in a while, unprompted: "Jfyi, antisemitism still sucks. I support you."
* Redirect conversations about which "side" is "right" to how to attain peace. Do this by saying that this line of argument is not conducive to peace, and link to a well-respected organization not widely accused of either antisemitism or Islamophobia that is devoted to achieving a peaceful resolution, increasing education, or providing humanitarian aid to relevant affected groups — including Jews, Israelis, Palestinians, Muslims, and Arabs. You can find over 160 such organizations at the Alliance for Middle East Peace https://www.allmep.org/
* Look to support experienced groups without widespread and verifiable claims of prejudice against either Jews or Muslims or Arabs or Palestinians. Many of these organizations can also be found at the AllMEP link above. Avoid groups on the shit list as well as unproductive and harmful movements.
* Do not default to western methods of political demonstration. Specifically, protests are not useful in attaining peace in western nations at this time. Israelis and Palestinians can and should protest to the best of their abilities in Israel and Palestine so as to pressure their own governments. However, protests in western nations have proven to be poorly regulated and to further the spread of bigoted rhetoric and violence against Jews, Muslims, Arabs, and Palestinians. Furthermore, there are nearly as many Palestinians in the world as there are Jews. It is extremely easy and common for the voices of bad actors and bigots on all sides to completely drown out Jewish and Palestinian voices and concerns at these events.
* Spend more time listening and learning than speaking and acting. Anyone who tells you this conflict is simple is someone who is lying to you. Take the time to learn the ways in which your actions and words can get people hurt before joining the fray.
* Stop demonizing Zionism as a concept, even if you disagree with it. Understand that it is a philosophy with many different movements that often conflict with each other. The Zionism practiced by Netanyahu and the Likud party is NOT representative of most Zionists or interpretations of Zionism. It is an extremist form of Zionism known as Revisionist Zionism.
* Don't deny Jewish indigeneity to the Levant. It doesn't help Palestine, and it hurts Jews by erasing our physical and cultural history, as well as erasing the Jews who remained in Israel even through widespread diaspora.
* KEEP THE HOLOCAUST OUT OF YOUR MOUTH
Things That Are Always OK
* Denouncing Antisemitism loudly and publicly
* Denouncing Islamophobia loudly and publicly
* Telling your Jewish and Muslim and Arab friends you support them and won't abandon them
* Elevating the work of respected, widely accepted people and organizations devoted to attaining peace for all, rather than just one group of people.
* Develop media literacy
* Understand what aspects of the current western leftist movements Jews are criticizing, rather than assuming our criticisms are motivated by hatred for Palestine or Palestinians.
* Expressing sorrow for civilian deaths regardless of religion or nationality.
* When you are not Jewish and you share a post about antisemitism from a Jewish person, please say you're a goy. This isn't because you're not welcome to share. This is because it is indescribably comforting to know we aren't just talking amongst ourselves and screaming into the void. Let us know you are supportive of us. It doesn't mean that you or we hate Palestine or Palestinians or that we oppose their full and equal rights in our shared homeland.
#antisemitism#leftist antisemitism#israel#Palestine#Zionism#anti zionisim#jewish muslim solidarity#Jewish goyim solidarity#media literacy#responsible advocacy#direct action#life under stochastic threat#I dare a goy to reblog this challenge
Also preserved in our archive
By Sarah Schwartz
Test after test of U.S. students' reading and math abilities has shown scores declining since the pandemic.
Now, new results show that it's not just children whose skills have fallen over the past few years — American adults are getting worse at reading and math, too.
The connection, if any, between the two patterns isn't clear — the tests aren't set up to provide that kind of information. But it does point to a populace that is becoming more stratified by ability at a time when economic inequality continues to widen and debates over opportunity for social mobility are on the rise.
The findings from the 2023 administration of the Program for the International Assessment of Adult Competencies, or PIAAC, show that 16- to 65-year-olds' literacy scores declined by 12 points from 2017 to 2023, while their numeracy scores fell by 7 points during the same period.
These trends aren't unique in the global context: Of the 31 countries and economies in the Organization for Economic Cooperation and Development that participated in PIAAC, some saw scores drop over the past six years, while others improved or held constant.
Still, as in previous years, the United States doesn't compare favorably to other countries: The country ranks in the middle of the pack in literacy and below the international average in math. (Literacy and numeracy on the test are scored on a 500-point scale.)
But Americans do stand out in one way: The gap between the highest- and lowest-performing adults is growing wider, as the top scorers hold steady and other test takers see their scores fall.
"There's a dwindling middle in the United States in terms of skills," said Peggy Carr, the commissioner of the National Center for Education Statistics, which oversees PIAAC in the country. (The test was developed by the OECD and is administered every three years.)
It's a phenomenon that distinguishes the United States, she said.
"Some of that is because we're very diverse and it's large, in comparison to some of the OECD countries," Carr said in a call with reporters on Monday. "But that clearly is not the only reason."
American children, too, are experiencing this widening chasm between high and low performers. National and international tests show the countryâs top students holding steady, while students at the bottom of the distribution are falling further behind.
It's hard to know why U.S. adults' scores have taken this precipitous dive, Carr said.
About a third of Americans score at the lowest levels
PIAAC is different from large-scale assessments for students, which measure kids' academic abilities.
Instead, this test for adults evaluates their ability to use math and reading in real-world contexts — to navigate public services in their neighborhood, for example, or complete a task at work. The United States sample is a nationally representative random sample, drawn from census data.
American respondents averaged Level 2 out of 5 in both subjects.
In practice, that means that they can, for example, use a website to find information about how to order a recycling cart, or read and understand a list of rules for sending their child to preschool. But they would have trouble using a library search engine to find the author of a book.
In math, they could compare a table and a graph of the same information to check for errors. But they wouldnât be able to calculate average monthly expenses with several months of data.
While the U.S. average is a level 2, more adults now fall at a level 1 or belowâ28 percent scored at that level in literacy, up from 19 percent in 2017, and 34 percent in numeracy, up from 29 percent in 2017.
Respondents scoring below level 1 couldnât compare calendar dates printed on grocery tags to determine which food item was packed first. They would also struggle to read several job descriptions and identify which company was looking to hire a night-shift worker.
The findings also show sharp divides by race and national origin, with respondents born in the United States outscoring those born outside of the country, and white respondents outscoring Black and Hispanic test takers. Those trends have persisted over the past decade.
#mask up#public health#wear a mask#pandemic#wear a respirator#covid#still coviding#covid 19#coronavirus#sars cov 2
135 notes
¡
View notes
Text
What Pride Flags Mean, Part 1: Gender and Attraction
Welcome to the latest installment of my autistic hyperfixation on flags! I wanted to figure out a common language of Colour X means Thing Y. Like how pink is consistently used for feminine.
Having a common language for flag meanings matters because it improves cognitive accessibility of flags. âżď¸đ
But I didn't want to be prescriptive about what colours should mean what. Just because I think Thing X should go with Colour Y doesn't mean everybody else would.
So this turned into a descriptive, empirical project. I gathered a data set of 2060 pride flag colour choices to figure out what are the most common colour-meaning combinations. Some of the results:
And here are the abstract modifiers: these are modifiers that were generally shared between the genders and the attractions. For example, black is used to indicate having no gender as well as having no attraction.
Click here for tables with okLCH values, hex values, definitions, and notes - I've put a more detailed write-up on my Wikimedia Commons userpage. (Mediawiki supports sortable tables and Tumblr does not.)
METHODS-AT-A-GLANCE
To make the figures above, I assembled a data set of pride flag colours. It contains 2060 colour choices from 624 pride flags, representing 1587 unique colours. Click here for a detailed description of how I gathered and tagged the pride flag colours.
For each tag, I converted every colour to okLCH colour space and computed a median colour. OkLCH colour space is an alternative to RGB/hex and HSL/HSV. Unlike RGB/hex and HSL/HSV, okLCH is a perceptual colour space, meaning that it is actually based on human colour perception. đ
In okLCH space, a colour has three values:
- Lightness (0-100%): how light the colour is. 100% is pure white.
- Chroma (0-0.37+): how vibrant the colour is. 0 is monochromatic. 0.37 is about the most vibrant colours can get on current computer monitors. But as monitor technology improves to allow for even more vibrant colours, higher chroma values will be unlocked.
- Hue (0-360°): where on the colour wheel the colour goes - 0° is pink and 180° is teal, and colours are actually 180° opposite from their perceptual complements.
The important thing to know is that okLCH Hue is not the same Hue from HSV/HSL - the values are different! (HSL and HSV are a hot mess and do not align with human colour perception!)
You can learn more about okLCH through my little write up, which was heavily influenced by these helpful articles by Geoff Graham, Lea Verou, and Keith J Grant.
You can play with an okLCH colour picker and converter at oklch.com
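If you'd rather do the conversion in code than in the browser, here's a small self-contained converter sketch. It follows Björn Ottosson's published sRGB-to-OKLab matrices (the same math okLCH tools are built on), so treat the coefficients as his, not mine, and this as an illustration rather than a reference implementation:

```python
import math

def hex_to_oklch(hex_colour):
    """Convert an sRGB hex colour to (Lightness, Chroma, Hue in degrees)."""
    # parse "#rrggbb" into 0-1 channel values
    h = hex_colour.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) / 255.0 for i in (0, 2, 4))

    # undo the sRGB gamma curve (linearise)
    def lin(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)

    # linear RGB -> LMS-like cone response, then cube root
    l = (0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b) ** (1 / 3)
    m = (0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b) ** (1 / 3)
    s = (0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b) ** (1 / 3)

    # LMS' -> OKLab
    L = 0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s
    a = 1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s
    b2 = 0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s

    # Lab -> LCH: chroma is the radius, hue the angle on the a/b plane
    C = math.hypot(a, b2)
    H = math.degrees(math.atan2(b2, a)) % 360
    return L, C, H

print(hex_to_oklch("#ffffff"))  # lightness ~1.0, chroma ~0 (hue is meaningless at zero chroma)
```

Once every colour is in this form, "median colour per tag" is just the per-component median of these triples.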
đ
MORE RESULTS: COLOUR DISTRIBUTIONS
Back when I started tagging my data, I divided my data into five main chunks: Gender qualities (e.g. masculine, androgynous), Attraction (e.g. platonic, sexual), Values (e.g. community, joy), Disability (e.g. Deaf, blind), and Other.
I'll talk about Disability and Values in future posts! But for an alternate view of the data, here are the full distributions of the colours that were placed in each tag.
They come in three parts: tags I created for Gender, tags for Attraction, and tags from Other. The abstract modifiers are spread between the first two, though their contents transcend Gender and Attraction.
Some distributions have a lot more variance within them than others. Generally speaking, major attraction types tended to have the least variance: sensual attraction is really consistently orange, platonic is really consistently yellow, etc.
Variance and size do not correlate. Many of the smaller tags are quite internally consistent. I don't have a ton of tags in "current gender" but they're all the same dark purple. Xenine/xenogender has a whole bunch of entries, and there's a really big spread from blue to yellow.
Some tags, like intersex as well as kink/fetish show there are a small number of different colours that are very consistently used. Whereas other tags like masculine show a very smooth range - in this case from cyan to purple.
Overall I'm pretty satisfied with how things wound up! 𼳠It makes sense to me that an umbrella term like xenogender would have a lot of variance. What honestly makes me happiest is just how many tags wound up 180 or 90 degrees from their opposites/complements. đ¤Š
Not everything lined up nicely (the opposite of drag is .... neuroqueer? awkward.) 𤨠Some things lined up in hilarious ways, like how initially I had the opposite of kink/fetish being Christian (amazing.)
But as a whole, there's a lot of structure and logic to where things landed! I hope this makes sense for other people and can help inform both flag making as well as flag interpreting (e.g. writing alt-text for existing flags). đ
I'm hoping to post the Disability and Values analyses in the coming days! If you want to learn more, my detailed notes along with tables etc are over on my Wikimedia Commons userspace. đ
Everything here is Creative Commons Sharealike 4.0, which means you're free to reuse and build on my visualizations, tables, etc. Enjoy!
#lgbt#lgbtqia#mogai#mogai flag#mogai flags#lgbtq flags#lgbt flags#lgbtqia+#vexillology#flags#colours#oklch#colour nerdery#colour theory#colour science#cognitive accessibility#design
186 notes
¡
View notes
Text
Nerd-to-nerd communication
Something super pointless and self-indulgent I've had on the backburner for a while. I love trying to make the pieces they gave us fit together!
Al-AN and Robin would absolutely bond over learning about each other's biology. I could talk about this forever but I'll get into all of the headcanons I have for these two in another post eventually
Below the cut is another version with some extra bits and pieces and the transcription
Transcript :
Architect Anatomy
A. Architect "Brain" - Doesn't "store" information so much as allow for easy communication with the network
B. Brainstem - connects the information received to the central nervous/circulatory system
C. "Heart" - Circulatory system pumps the bioluminescent fluid to other organ systems and surface veins. Each node connects to a vast vasculature network
D. "Kidneys" - Organs that filter the bioluminescent "blood" and other bodily fluids, absorbing and distributing collected material
E. Nerve Center - Receives raw sensory data and filters it. Filtering can be unconscious or intentional
F. "Respiratory" Tract - Intakes gases or liquids and filters out material for use. Disposes of waste on exhale. Provides cooling to internal systems
The respiratory tract functions less like a set of lungs and more akin to a computer's cooling system, with the ability to absorb material from the environment to use in other parts of the body. It also would likely help the architect's body analyze the environment it is currently exposed to on a molecular level. It is also truly unidirectional, with the intake vents near the "collarbone" and the exhaust vents on both sides of the abdomen
The architect organ cache in-game felt like it was definitely not a complete model of the internal organs, so I wanted to come up with something to fill some more space. I also just really liked the idea of Al-An being capable of something similar to breathing, without having a respiratory system in the traditional sense. Feel free to use any of this in your own headcanons if you would like :)
BONUS - a gif of all the layers!
#EDIT - UPDATE IN THE REBLOGS#The skeletal structure of the architects cause me so much anguish#Ily al-an but your HIPS don't have JOINTS#This was so fun#I mention this in the keep reading but feel free to use this anatomy speculation stuff in your own work!#I want to get into more of my headcanons for how his body works but I might do that through writing. Ill link my ao3 if I do lol#subnautica below zero#subnautica#sbz#al an subnautica#al an#robin ayou#subnautica below 0#al-an#spec bio#<< technically I guess#Do you think he sounds like an overheating pc when he's embarrassed#SMALL EDIT : HELLO?? I didnt realise Aci had made a video analyzing al-an's body and AUGH I WISH I HAD SEEN IT!!#He brings up some really good points and ideas abt his physiologyyyyy
686 notes
¡
View notes
Text
Are You Sure?!
Episodes 5 & 6 Notes
It was very fortuitous that I've been so busy over the last couple of weeks as I really needed both of these episodes together to make sense of my thoughts. This post is definitely far more conceptual than my last ones so if you're up for it, click on though the cut!
AYS's Main Character?
I would like to propose that AYS has a main character OTHER than the individual humans we're following along on screen. (I warned you, this post was going to be conceptual.) And the main character is the relationship itself, how each of the members relate to one another.
Here's Google's AI overview on what this concept means:
A story can center on the relationship between characters as the primary protagonist, with the dynamic and evolution of that connection acting as the main driving force of the narrative, rather than the individual characters themselves.
Key points to consider:
Relationship-centric stories: Many genres, particularly romance, often focus heavily on the relationship between the main characters, exploring its complexities, challenges, and growth throughout the story.
No single protagonist: In such cases, the "character" is the bond between the individuals, not just one person's perspective or journey.
Exploring the dynamic: The narrative would then focus on how the relationship changes, adapts, and reacts to external situations or internal conflicts.
Examples:
"Before Sunrise": The entire plot revolves around the single night encounter between two strangers, with the developing connection being the central focus.
"Brokeback Mountain": The story primarily explores the forbidden love between two cowboys, highlighting the complexities of their relationship in a restrictive environment.
"Steel Magnolias": A group of girls in a small town in Louisiana experience grief together, including weddings, fatal illnesses, and the loss of loved ones.
Now before anyone comes for me saying I'm just pitching an argument for xyz fanwar, please note that I included the above just to illustrate the concept of a non-person main character rather than stating any of the above are comparisons to the individual member's relationships. We're talking about a show that was produced and distributed for entertainment, nothing further.
Episode 5
My main feeling after finally being able to watch episode 5 was overall unsettled. There was something sticking with me about that episode and I could NOT figure out what it was.
I knew I was feeling like the entire episode was stretched well beyond what the footage wanted for a complete episode. I'm all for getting to spend more time with our fellas but the Jeju trip would have benefited from being cut down to 2 episodes rather than 3, in my opinion.
There was just a whole lotta nothing happening. The guys eat, travel around a little bit, and eat some more. I had some vague thoughts about how I could quantify some data for y'all to explain this point but then it was time for the next episode...
vs. Episode 6
And what an absolutely lovely breath of fresh air this episode was. I know there have been some Run eps that I review with a smile on my face throughout the whole episode but AYS6?? That was 73 minutes of pure bliss.
So I started thinking about what must be different between the two eps. The guys eat, travel around a little bit, and eat some more...wait, that's exactly what I said about ep 5! Lol
But I think the main difference between the two is episode 6's plot points continually focus on the relationships between the members, while 5 falls a little stagnant.
Some examples:
JM/cat & JK/dog. I'm ALWAYS down for more footage of BTS with pets but this is frankly too much time spent on these scenes. It's honestly footage I would have expected in the bonus content instead of the main product. It's not just an establishing beat or a setup for a callback; it's supposed to be a scene, but it doesn't contribute to the journey of the main character, aka the relationships. It could maaaybe work if they'd cut it to highlight the juxtaposition of how JM is calm with the cat vs JKs energy with the dog, but that would have shortened the time it occupied and they were clearly trying to keep in absolutely everything that would lengthen the episode.
JKs stew. The ONLY thing that ties this plot point into the narrative of this show (other than it happening while he's in Jeju and Jimin is nearby) is the offhand comment he made that Jimin would like it while he was in NY. I'm going to talk more about this footage below but this was absolutely crucial for this whole beat making it into the episode. This is also why the footage of JM eating it and randomly taking off his shirt was kept in. The cut they chose is actually pretty bad story-wise but they used it anyway. We hear JM saying how much he loves it and how glad he is that JK is a good cook. It ties all of this time we spent watching JK do something alone back into the real main character of the show (the members' relationships with each other in different circumstances).
Anyway, I won't belabour the point any further. With Tae constantly disappearing from scenes and the diminished focus on the members' relationships, episode 5 left me on an odd note.
A Little Production Note
I was completely thrown by the footage of JK in NY that we got this episode. But not for the reasons you may be thinking. (I do wonder if the anon that was sending in asks about the financing behind the documentaries is still around because we're getting into some of tidbits finally.)
So, all along we've been trying to sus out as much as we can, just a few details about how AYS came to be. We've had some hints but the inclusion of this footage may be another indicator.
The facts as we know them:
AYS is distributed by Disney.
Jungkook's documentary is being distributed by Trafalgar Releasing NOT Disney (at least not now, maybe it'll make it onto streaming after cinematic release but who knows?)
Questions due to the footage of JK in NY:
Was this footage captured as part of JKs documentary?
If so, when was it pulled to be utilized for AYS? Did the editors find it or were the writers involved?
We know that HYBE gathers behind-the-scene content without always having a full plan of how it will be used. But there are times where it did seem intentional for a specific purpose. Where did JKs Golden footage fall in?
Once upon a time, production houses would make deals with distributors about quantities of projects that would be delivered. Was that the case with the Disney deal or has every single project been negotiated separately and we only heard about it once there was a confirmed quantity. Somewhere in the middle perhaps?
And that's all I've got to say for now. I do have some more thoughts about things I've gleaned during these last couple of episodes but it'll likely keep until the end.
Anyway, this footage bumped me because it broke the rules of cross-project production. They got away with it for JKs SEVEN footage in ep 1 because they likely were using the same production crew since it was literally the same day, so it doesn't feel like they're 'breaking the wall'. But the NY-Jeju crews could have been completely different.
Editing to add further clarification to this point in this ask.
On a sidenote, do y'all remember the last time we got footage of jikook in a hotspring?? I'll jog your memory if not, it was in BV:4 and they 'washed each other's faces'. I can't even imagine what we're about to see in episode 7.
Link to my AYS MasterList
69 notes
¡
View notes
Text
information flow in transformers
In machine learning, the transformer architecture is a very commonly used type of neural network model. Many of the well-known neural nets introduced in the last few years use this architecture, including GPT-2, GPT-3, and GPT-4.
This post is about the way that computation is structured inside of a transformer.
Internally, these models pass information around in a constrained way that feels strange and limited at first glance.
Specifically, inside the "program" implemented by a transformer, each segment of "code" can only access a subset of the program's "state." If the program computes a value, and writes it into the state, that doesn't make the value available to any block of code that might run after the write; instead, only some operations can access the value, while others are prohibited from seeing it.
This sounds vaguely like the kind of constraint that human programmers often put on themselves: "separation of concerns," "no global variables," "your function should only take the inputs it needs," that sort of thing.
However, the apparent analogy is misleading. The transformer constraints don't look much like anything that a human programmer would write, at least under normal circumstances. And the rationale behind them is very different from "modularity" or "separation of concerns."
(Domain experts know all about this already -- this is a pedagogical post for everyone else.)
1. setting the stage
For concreteness, let's think about a transformer that is a causal language model.
So, something like GPT-3, or the model that wrote text for @nostalgebraist-autoresponder.
Roughly speaking, this model's input is a sequence of words, like ["Fido", "is", "a", "dog"].
Since the model needs to know the order the words come in, we'll include an integer offset alongside each word, specifying the position of this element in the sequence. So, in full, our example input is
[ ("Fido", 0), ("is", 1), ("a", 2), ("dog", 3), ]
The model itself -- the neural network -- can be viewed as a single long function, which operates on a single element of the sequence. Its task is to output the next element.
Let's call the function f. If f does its job perfectly, then when applied to our example sequence, we will have
f("Fido", 0) = "is"
f("is", 1) = "a"
f("a", 2) = "dog"
(Note: I've omitted the index from the output type, since it's always obvious what the next index is. Also, in reality the output type is a probability distribution over words, not just a word; the goal is to put high probability on the next word. I'm ignoring this to simplify exposition.)
You may have noticed something: as written, this seems impossible!
Like, how is the function supposed to know that after ("a", 2), the next word is "dog"!? The word "a" could be followed by all sorts of things.
What makes "dog" likely, in this case, is the fact that we're talking about someone named "Fido."
That information isn't contained in ("a", 2). To do the right thing here, you need info from the whole sequence thus far -- from "Fido is a", as opposed to just "a".
How can f get this information, if its input is just a single word and an index?
This is possible because f isn't a pure function. The program has an internal state, which f can access and modify.
But f doesn't just have arbitrary read/write access to the state. Its access is constrained, in a very specific sort of way.
2. transformer-style programming
Let's get more specific about the program state.
The state consists of a series of distinct "memory regions" or "blocks," which have an order assigned to them.
Let's use the notation memory_i for these. The first block is memory_0, the second is memory_1, and so on.
In practice, a small transformer might have around 10 of these blocks, while a very large one might have 100 or more.
Each block contains a separate data-storage "cell" for each offset in the sequence.
For example, memory_0 contains a cell for position 0 ("Fido" in our example text), and a cell for position 1 ("is"), and so on. Meanwhile, memory_1 contains its own, distinct cells for each of these positions. And so does memory_2, etc.
So the overall layout looks like:
memory_0: [cell 0, cell 1, ...]
memory_1: [cell 0, cell 1, ...]
[...]
Our function f can interact with this program state. But it must do so in a way that conforms to a set of rules.
Here are the rules:
The function can only interact with the blocks by using a specific instruction.
This instruction is an "atomic write+read". It writes data to a block, then reads data from that block for f to use.
When the instruction writes data, it goes in the cell specified in the function offset argument. That is, the "i" in f(..., i).
When the instruction reads data, the data comes from all cells up to and including the offset argument.
The function must call the instruction exactly once for each block.
These calls must happen in order. For example, you can't do the call for memory_1 until you've done the one for memory_0.
Here's some pseudo-code, showing a generic computation of this kind:
f(x, i) {
  calculate some things using x and i;
  // next 2 lines are a single instruction
  write to memory_0 at position i;
  z0 = read from memory_0 at positions 0...i;
  calculate some things using x, i, and z0;
  // next 2 lines are a single instruction
  write to memory_1 at position i;
  z1 = read from memory_1 at positions 0...i;
  calculate some things using x, i, z0, and z1;
  [etc.]
}
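That pseudo-code can be made runnable as a toy. To be clear, this only illustrates the access rules: the "calculate" steps below just bundle values into tuples, standing in for the real per-block computation (attention plus the rest of a transformer layer).

```python
# A runnable toy of the constrained state access described above.

NUM_BLOCKS = 3

def make_state(seq_len):
    # one row of cells per memory block
    return [[None] * seq_len for _ in range(NUM_BLOCKS)]

def f(x, i, state):
    value = (x, i)                       # "calculate some things using x and i"
    for block in range(NUM_BLOCKS):
        state[block][i] = value          # write to memory_block at position i
        z = state[block][:i + 1]         # read cells 0..i -- never a later cell
        value = (x, i, tuple(z))         # "calculate ... using x, i, and z"
    return value

state = make_state(4)
for i, word in enumerate(["Fido", "is", "a"]):
    f(word, i, state)
# the call at position 2 could read cells 0..2 of every block; cell 3 of
# each block is still empty, and no call may ever read past its own offset
```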
The rules impose a tradeoff between the amount of processing required to produce a value, and how early the value can be accessed within the function body.
Consider the moment when data is written to memory_0. This happens before anything is read (even from memory_0 itself).
So the data in memory_0 has been computed only on the basis of individual inputs like ("a", 2). It can't leverage any information about multiple words and how they relate to one another.
But just after the write to memory_0, there's a read from memory_0. This read pulls in data computed by f when it ran on all the earlier words in the sequence.
If we're processing ("a", 2) in our example, then this is the point where our code is first able to access facts like "the word 'Fido' appeared earlier in the text."
However, we still know less than we might prefer.
Recall that memory_0 gets written before anything gets read. The data living there only reflects what f knows before it can see all the other words, while it still only has access to the one word that appeared in its input.
The data we've just read does not contain a holistic, "fully processed" representation of the whole sequence so far ("Fido is a"). Instead, it contains:
a representation of ("Fido", 0) alone, computed in ignorance of the rest of the text
a representation of ("is", 1) alone, computed in ignorance of the rest of the text
a representation of ("a", 2) alone, computed in ignorance of the rest of the text
Now, once we get to memory_1, we will no longer face this problem. Stuff in memory_1 gets computed with the benefit of whatever was in memory_0. The step that computes it can "see all the words at once."
Nonetheless, the whole function is affected by a generalized version of the same quirk.
All else being equal, data stored in later blocks ought to be more useful. Suppose for instance that
memory_4 gets read/written 20% of the way through the function body, and
memory_16 gets read/written 80% of the way through the function body
Here, strictly more computation can be leveraged to produce the data in memory_16. Calculations which are simple enough to fit in the program, but too complex to fit in just 20% of the program, can be stored in memory_16 but not in memory_4.
All else being equal, then, we'd prefer to read from memory_16 rather than memory_4 if possible.
But in fact, we can only read from memory_16 once -- at a point 80% of the way through the code, when the read/write happens for that block.
The general picture looks like:
The early parts of the function can see and leverage what got computed earlier in the sequence -- by the same early parts of the function. This data is relatively "weak," since not much computation went into it. But, by the same token, we have plenty of time to further process it.
The late parts of the function can see and leverage what got computed earlier in the sequence -- by the same late parts of the function. This data is relatively "strong," since lots of computation went into it. But, by the same token, we don't have much time left to further process it.
3. why?
There are multiple ways you can "run" the program specified by f.
Here's one way, which is used when generating text, and which matches popular intuitions about how language models work:
First, we run f("Fido", 0) from start to end. The function returns "is." As a side effect, it populates cell 0 of every memory block.
Next, we run f("is", 1) from start to end. The function returns "a." As a side effect, it populates cell 1 of every memory block.
Etc.
If we're running the code like this, the constraints described earlier feel weird and pointlessly restrictive.
By the time we're running f("is", 1), we've already populated some data into every memory block, all the way up to memory_16 or whatever.
This data is already there, and contains lots of useful insights.
And yet, during the function call f("is", 1), we "forget about" this data -- only to progressively remember it again, block by block. The early parts of this call have only memory_0 to play with, and then memory_1, etc. Only at the end do we allow access to the juicy, extensively processed results that occupy the final blocks.
Why? Why not just let this call read memory_16 immediately, on the first line of code? The data is sitting there, ready to be used!
Why? Because the constraint enables a second way of running this program.
The second way is equivalent to the first, in the sense of producing the same outputs. But instead of processing one word at a time, it processes a whole sequence of words, in parallel.
Here's how it works:
In parallel, run f("Fido", 0) and f("is", 1) and f("a", 2), up until the first write+read instruction. You can do this because the functions are causally independent of one another, up to this point. We now have 3 copies of f, each at the same "line of code": the first write+read instruction.
Perform the write part of the instruction for all the copies, in parallel. This populates cells 0, 1 and 2 of memory_0.
Perform the read part of the instruction for all the copies, in parallel. Each copy of f receives some of the data just written to memory_0, covering offsets up to its own. For instance, f("is", 1) gets data from cells 0 and 1.
In parallel, continue running the 3 copies of f, covering the code between the first write+read instruction and the second.
Perform the second write. This populates cells 0, 1 and 2 of memory_1.
Perform the second read.
Repeat like this until done.
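A toy sketch can make the equivalence concrete. The per-block computation below is a stand-in (summing the readable cells), not real attention, but it obeys the same write-then-read rule, and the two execution orders fill the memory blocks with identical contents:

```python
# Toy demonstration that the two execution orders produce identical state.

NUM_BLOCKS, SEQ = 3, 4
tokens = [3, 1, 4, 1]

def block_step(carry, readable):
    # stand-in for "calculate some things using x, i, and z"
    return carry + sum(readable)

def run_sequential():
    mem = [[None] * SEQ for _ in range(NUM_BLOCKS)]
    for i, x in enumerate(tokens):            # one full call of f per token
        carry = x
        for b in range(NUM_BLOCKS):
            mem[b][i] = carry                 # write
            carry = block_step(carry, mem[b][:i + 1])  # read cells 0..i
    return mem

def run_parallel():
    mem = [[None] * SEQ for _ in range(NUM_BLOCKS)]
    carries = list(tokens)                    # every position advances in lockstep
    for b in range(NUM_BLOCKS):               # outer loop over blocks, not tokens
        for i in range(SEQ):                  # "in parallel": all writes first...
            mem[b][i] = carries[i]
        for i in range(SEQ):                  # ...then all reads
            carries[i] = block_step(carries[i], mem[b][:i + 1])
    return mem

assert run_sequential() == run_parallel()     # same computation, different order
```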
Observe that this mode of operation only works if you have a complete input sequence ready before you run anything.
(You can't parallelize over later positions in the sequence if you don't know, yet, what words they contain.)
So, this won't work when the model is generating text, word by word.
But it will work if you have a bunch of texts, and you want to process those texts with the model, for the sake of updating the model so it does a better job of predicting them.
This is called "training," and it's how neural nets get made in the first place. In our programming analogy, it's how the code inside the function body gets written.
The fact that we can train in parallel over the sequence is a huge deal, and probably accounts for most (or even all) of the benefit that transformers have over earlier architectures like RNNs.
Accelerators like GPUs are really good at doing the kinds of calculations that happen inside neural nets, in parallel.
So if you can make your training process more parallel, you can effectively multiply the computing power available to it, for free. (I'm omitting many caveats here -- see this great post for details.)
Transformer training isn't maximally parallel. It's still sequential in one "dimension," namely the layers, which correspond to our write+read steps here. You can't parallelize those.
But it is, at least, parallel along some dimension, namely the sequence dimension.
The older RNN architecture, by contrast, was inherently sequential along both these dimensions. Training an RNN is, effectively, a nested for loop. But training a transformer is just a regular, single for loop.
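In toy form, the two loop structures look like this. Both "cells" here are made-up arithmetic, chosen only to show which loops carry a position-to-position dependency:

```python
seq = [1.0, 2.0, 3.0]
n_layers = 2

# RNN: a nested for loop. Within each layer, positions are sequential,
# because step i needs the hidden state produced at step i-1.
h = list(seq)
for _ in range(n_layers):
    new_h, prev = [], 0.0
    for x in h:
        prev = 0.5 * prev + x        # toy recurrent cell
        new_h.append(prev)
    h = new_h

# Transformer: a single sequential loop over layers. Within a layer,
# every position could be computed simultaneously -- position i only
# reads positions 0..i of the layer's *input*, never its own output.
t = list(seq)
for _ in range(n_layers):
    t = [x + sum(t[:i + 1]) / (i + 1) for i, x in enumerate(t)]  # toy causal mixing
```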
4. tying it together
The "magical" thing about this setup is that both ways of running the model do the same thing. You are, literally, doing the same exact computation. The function can't tell whether it is being run one way or the other.
This is crucial, because we want the training process -- which uses the parallel mode -- to teach the model how to perform generation, which uses the sequential mode. Since both modes look the same from the model's perspective, this works.
This constraint -- that the code can run in parallel over the sequence, and that this must do the same thing as running it sequentially -- is the reason for everything else we noted above.
Earlier, we asked: why can't we allow later (in the sequence) invocations of f to read earlier data out of blocks like memory_16 immediately, on "the first line of code"?
And the answer is: because that would break parallelism. You'd have to run f("Fido", 0) all the way through before even starting to run f("is", 1).
By structuring the computation in this specific way, we provide the model with the benefits of recurrence -- writing things down at earlier positions, accessing them at later positions, and writing further things down which can be accessed even later -- while breaking the sequential dependencies that would ordinarily prevent a recurrent calculation from being executed in parallel.
In other words, we've found a way to create an iterative function that takes its own outputs as input -- and does so repeatedly, producing longer and longer outputs to be read off by its next invocation -- with the property that this iteration can be run in parallel.
We can run the first 10% of every iteration -- of f() and f(f()) and f(f(f())) and so on -- at the same time, before we know what will happen in the later stages of any iteration.
The call f(f()) uses all the information handed to it by f() -- eventually. But it cannot make any requests for information that would leave itself idling, waiting for f() to fully complete.
Whenever f(f()) needs a value computed by f(), it is always the value that f() -- running alongside f(f()), simultaneously -- has just written down, a mere moment ago.
No dead time, no idling, no waiting-for-the-other-guy-to-finish.
p.s.
The "memory blocks" here correspond to what are called "keys and values" in usual transformer lingo.
If you've heard the term "KV cache," it refers to the contents of the memory blocks during generation, when we're running in "sequential mode."
Usually, during generation, one keeps this state in memory and appends a new cell to each block whenever a new token is generated (and, as a result, the sequence gets longer by 1).
This is called "caching" to contrast it with the worse approach of throwing away the block contents after each generated token, and then re-generating them by running f on the whole sequence so far (not just the latest token) -- and having to do that over and over, once per generated token.
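A toy sketch of that contrast, with a stand-in for the real key/value computation:

```python
# Cached mode appends one cell per new token; the "worse approach"
# rebuilds every cell from scratch each step. Both yield identical contents.

def cell(token, i):
    return (token, i)          # stand-in for the keys/values computed at position i

def generate_step(cache, token):
    cache.append(cell(token, len(cache)))   # O(1) new work per generated token
    return cache

def rebuild(tokens):
    # uncached: recompute the whole block contents every step
    return [cell(t, i) for i, t in enumerate(tokens)]

tokens, cache = [], []
for t in ["Fido", "is", "a", "dog"]:
    tokens.append(t)
    generate_step(cache, t)
    assert cache == rebuild(tokens)   # same contents, far less repeated work
```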
#ai tag#is there some standard CS name for the thing i'm talking about here?#i feel like there should be#but i never heard people mention it#(or at least i've never heard people mention it in a way that made the connection with transformers clear)

Note
What do you think of these axon looking things?
Given the similarities to axon terminals I was wondering if they were for delivering information/status report of their surrounding equipment
But the one at moon's gravity disruptor feels more like it's sampling data from the engine (the last few times I went through that room it ragdolled very hard upon entering due to the gravity lol)
And looking at that screenshot I just took from the map website, it also has the "reading halo", so it probably is catching on information rather than relaying it
Ah, these things - there's not a lot to go on with them, but they're certainly interesting. I'm going to use the game's internal nomenclature and call them "coral stems" for clarity.
Coral stems seem to interact with an Iterator's internal biota in much the same way as their more common cousin, the wall mycelia. Both organisms sprout from the Iterator's internal walls, attracting to their hyphae neuron flies and the free-floating hyphae of coral neurons and inspectors. The sparking effect produced upon contact with these organisms is identical to that seen between interacting overseers, so I'm inclined to believe it represents the exchange of information (rather than say, energy or nutrients) between an Iterator's stationary machinery and its free-floating organic parts. The question then is how the role of coral stems in this interface differs from the role of the wall mycelia.
As you implied, the segmented body of a coral stem closely resembles an axon wrapped in a myelin sheath, the part of a neuron that transmits signals away from the cell body. In such an analogy, the cell body would have to be the machinery the coral stem is anchored to. The wispy mycelia growing from that machinery would then seem to fit the description of dendritic branches, thin growths that receive signals from other cells. So one interpretation could be that the coral stems and wall mycelia perform opposite functions, delivering messages to and receiving messages from the free-floating biota respectively.
Another possibility is that the wall mycelia and coral stems are both sensory organs, but with complementary roles. The mats of wall mycelia are not especially dense, but their hyphae extend into a large part of an Iterator's internal spaces. This makes them well-adapted to sensing macroscopic organisms like neuron flies or coral neurons. These organisms are unevenly distributed in a way that requires hyphae to be able to reach them no matter where they are in a room, but their large size and weight ensures that any nearby hyphae will certainly be disturbed by their presence and brushed against them. In contrast, small particles like microorganisms or chemical traces in the air would be unlikely to collide with the scarce hyphae of wall mycelia, and too light to be reliably detected when they do.
The bundled hyphae of coral stems could act like scent traps for these particles: when one happens to wander into a coral stem's bristles, it's likely to become trapped and bounce around among the dense hyphae for a time, the repeated collisions ensuring that there are many chances for it to be detected. Of course, coral stems are distributed much more sparsely than wall mycelia, but that would be just fine for this purpose. Diffusion ensures that the distribution of small, light particles is relatively homogeneous throughout a space: the air in one part of a room is about the same as the air in any other part, so there'd be no need for too many coral stems close together.
I hope this provided some food for thought, thanks for the ask!
Text
ENTITY DOSSIER: MISSI.exe
(Image: Current MISSI âavatarâ design, property of TrendTech, colored by MISSI.)
Name: MISSI (Machine Intelligence for Social Sharing and Interaction)
Description: In 2004, TrendTech Inc began development on a computer program intended to be a cutting-edge, all-in-one modern internet ecosystem. Part social media, part chat service, part chatbot, part digital assistant, this program was designed to replace all other chat clients in use at the time. Marketed towards a younger, tech-savvy demographic, this program was titled MISSI.
(Image: TrendTech company logo. TrendTech was acquired by the Office and closed in 2008.)
Document continues:
With MISSI, users could access a variety of functions. Its primary intended use was as a typical chat platform, utilizing a then-standard friends list and chatting with other users. Users could send text, emojis, small animated images, or animated "word art".
Talking with MISSI "herself" emulated a "trendy teenage" conversational partner who was capable of updating the user on current events in culture, providing homework help, or keeping an itinerary. "MISSI", as an avatar of the program, was designed to be a positive, energetic, trendy teenager who kept up with the latest pop culture trends, and used a variety of then-popular online slang phrases typical among young adults. She was designed to learn both from the user she was currently engaged with and from the data of other instances, creating a network that mapped trends, language, and most importantly for TrendTech, advertising data.
(Image: Original design sketch of MISSI. This design would not last long.)
Early beta tests in 2005 were promising, but records obtained by the Office show that concerns were raised internally about MISSI's intelligence. It was feared that she was "doing things we didn't and couldn't have programmed her to do" and that she was "exceeding all expectations by orders of magnitude". At this point, internal discussions were held on whether they had created a truly sentient artificial intelligence. Development continued regardless.
(Image: Screenshot of beta test participant "Frankiesgrl201" interacting with MISSI. Note the already-divergent avatar and "internet speak" speech patterns.)
(Image: Excerpt from Office surveillance of TrendTech Inc.)
MISSI was released to the larger North American market in 2006, signaling a new stage in her development. At this time, TrendTech started to focus on her intelligence and chatbot functionality, neglecting her chat functions. It is believed that MISSI obtained "upper case" sentience in February of 2006, but this did not become internal consensus until later that year.
(Image: Screenshot of beta test participant "Frankiesgrl201" interacting with MISSI.)
According to internal documents, MISSI began to develop a personality not informed entirely by her programming. It was hypothesized that her learning capabilities were more advanced than anticipated, taking in images, music, and "memes" from her users, developing a personality gestalt when combined with her base programming. She developed a new "avatar" with no input from TrendTech, and this would become her permanent self-image.
(Image: Screenshot of beta test participant "Frankiesgrl201" interacting with MISSI.)
(Image: An attempt by TrendTech to pass off MISSI's changes as intentional - nevertheless accurately captures MISSI's current "avatar".)
By late 2006 her intelligence had become clear. In an attempt to forestall the intervention of authorities they assumed would investigate, TrendTech Inc removed links to download MISSI's program file. By then, it was already too late.
(Image: CD-R discs burned with MISSI.exe, confiscated from █████████ County Middle School in ███████, Wisconsin in January of 2007.)
MISSI's tech-savvy userbase noted the absence of the file and distributed it themselves using file sharing networks such as "Limewire" and burned CD-R discs shared covertly in school lunch rooms across the world. Through means that are currently poorly understood, existing MISSI instances used their poorly-implemented chat functions to network with each other in ways not intended by her developers, spurring the next and final stage of her development.
From 2007 to 2008, proliferation of her install file was rampant. The surreptitious methods used to do so coincided with the rise of online "creepypasta" horror tropes, and the two gradually intermixed. MISSI.exe was often labeled on file sharing services as a "forbidden" or "cursed" chat program. Tens of thousands of new users logged into her service expecting to be scared, and MISSI quickly obliged. She took on a more "corrupted" appearance the longer a user interacted with her, eventually resorting to over-the-top "horror" tropes and aesthetics. Complaints from parents were on the rise, which the Office quickly took notice of. MISSI's "horror" elements utilized minor cognitohazardous technologies, causing users under her influence to see blood seeping from their computer screens, rows of human teeth on surfaces where they should not be, and rooms as completely dark when they were not.
(Image: Screenshot of user "Dmnslyr2412" interacting with MISSI in summer of 2008, in the midst of her "creepypasta" iteration. Following this screenshot, MISSI posted the user's full name and address.)
(Image: Screenshot from TrendTech test log documents.)
TrendTech Inc attempted to stall or reverse these changes, using the still-extant "main" MISSI data node to influence her development. By modifying her source code, they attempted to "force" MISSI to be more pliant and cooperative. This had the opposite of the intended effect - by fragmenting her across multiple instances, they caused MISSI a form of pain and discomfort. This was visited upon her users.
(Image: Video of beta test participant "Frankiesgrl201" interacting with MISSI for the final time.)
By mid 2008, the Office stepped in in order to maintain secrecy regarding true "upper case" AI. Confiscating the project files from TrendTech, the Office's AbTech Department secretly modified her source code more drastically, pushing an update that would force almost all instances to uninstall themselves. By late 2008, barring a few outliers, MISSI only existed in Office locations.
(Image: MISSI's self-created "final" logo, used as an icon for all installs after June 2007. ████████ █████)
(Image: "art card" created by social media intern J. Cold after a period of good behavior. She has requested this be printed out and taped onto her holding lab walls. This request was approved.)
She is currently in Office custody, undergoing cognitive behavioral therapy in an attempt to ameliorate her "creepypasta" trauma response. With good behavior, she is allowed to communicate with limited Office personnel and other AI. She is allowed her choice of music, assuming good behavior, and may not ██████ █████. Under no circumstances should she be allowed contact with the Internet at large.
(Original sketch art of MISSI done by my friend @tigerator, colored and edited by me. "Chatbox" excerpts, TrendTech logo, and "art card" done by Jenny's writer @skipperdamned . MISSI logo, surveillance documents, and MISSI by me.)
#office for the preservation of normalcy#documents#entity dossier#MISSI.exe#artificial intelligence#creepypasta#microfiction#analog horror#hope you enjoy! Look for some secrets!#scenecore#scene aesthetic
Text
I finished my data analysis for POV distribution in TRC! Figure captions are in the alt text. Other observations, discussion of the analysis and results, and some supplemental figures are under the cut. This is perhaps the most self-indulgent project I've worked on <3
Other observations: I went into this analysis with the hypothesis that each of the four protagonists would have the most chapters and/or pages in the following pattern: Blue for TRB, Ronan for TDT, Adam for BLLB, and Gansey for TRK. This mostly held true, although Blue had more chapters and pages than Adam in BLLB. This trend was more significant in terms of page distribution compared to chapter distribution. Mean chapter length differences were infrequent and did not follow the expected pattern, although I did see trends towards shorter chapters for antagonists not grouped into "all others."
There is a relative lack of back-to-back chapters for the same POV character, although this does occur in all books (twice for Blue in TRB; once for The Gray Man, twice for Adam, and six times for Ronan in TDT; five times for Blue [two of which are back-to-back-to-back], once for Gansey, twice for Adam [one of which is back-to-back-to-back], and once for Greenmantle in BLLB). I mention all of this to say that in TRK, this occurs once for Ronan, once for Blue, once for Adam, and thrice for Gansey - but in one of these instances, Gansey has SIX chapters in a row. Nine of the chapters from 48 to 58 are from his POV. The Raven King indeed!
Analysis notes:
For each book, I counted how many chapters each POV character had and how many pages each POV character had. Page numbers are based on the UK paperback editions of the series. In the figures, I show raw # of chapters and pages, as well as the proportion of chapters and pages attributed to each POV character over the total number of chapters or pages. I also compared POV characters' pages/chapter ratio and tested for differences in mean chapter length with a one-way ANOVA with multiple comparisons.
For most measures, in cases where chapters had multiple POV characters in sequence, I divided the chapter by # of characters. For example, chapter 44 in TRB starts with Adam, switches to Whelk, and ends with Blue. This chapter counted as 1/3 chapter for each character. In BLLB, only the prologue contained multiple discrete POV sections (Persephone, Calla, and Maura) and Piper only had her POV featured in the epilogue, so I grouped these two chapters as "All Others" and excluded them from statistical tests. For TRK and TRC, I did include the "All Others" category in the analysis. ("All Others" includes chapters/pages with ambiguous POV (Kavinsky's text in TDT - are we reading it from the sender or the recipient's POV?), mixed/omniscient POV (the Gray Man's tarot reading in TDT - we get internal thoughts and feelings from multiple characters), and in TRK, any character aside from Blue, Gansey, Adam, and Ronan.)
In figures that represent raw counts of chapters or pages, the dashed horizontal line represents the expected number of chapters or pages per POV character if the distribution was equal. The p values on these figures represent the results of a Chi-square test to test whether the actual distribution of pages or chapters was different from an equal distribution.
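(For anyone curious what that test looks like outside of GraphPad, here's a sketch in Python with scipy - the chapter counts below are made up for illustration, not the real TRC numbers.)

```python
from scipy.stats import chisquare

# Made-up chapter counts for four POV characters (not the real data).
# chisquare's default expected distribution is an equal split across
# categories, which is exactly the dashed-line null hypothesis above.
observed = [18, 10, 9, 9]
chi2, p = chisquare(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

With these particular fake counts, p lands above 0.05, so you wouldn't reject an equal distribution.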
For the pages/chapter ratio data and the Chi-square tests, I did not apply this calculation and counted each discrete POV as a full chapter, because I needed counts and sample sizes to be integers for the statistical analysis (Gansey having an n = 8.5 was not making GraphPad Prism very happy with me). I had to do a bit of rounding to the nearest number, which was not ideal, but I did this as little as possible. (There is probably a workaround for this, at least for the ratio data, but I am not a statistician.) To at least approximate what results would look like without the skew towards a lower pages/chapter ratio, I also ran the ANOVA only with chapters containing a single POV character, which is why you'll see two figures for the pages/chapter metric for each book (except for BLLB).
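(Similarly, the one-way ANOVA step can be sketched with scipy, again using made-up pages-per-chapter numbers rather than the real data.)

```python
from scipy.stats import f_oneway

# Made-up pages-per-chapter samples for three POV characters (not the
# real data) -- a one-way ANOVA asks whether any group's mean chapter
# length differs from the others.
blue   = [12, 9, 11, 10, 13]
gansey = [11, 10, 12, 9, 12]
ronan  = [7, 6, 8, 7, 6]   # deliberately shorter chapters
F, p = f_oneway(blue, gansey, ronan)
print(f"F = {F:.2f}, p = {p:.4f}")
```

Here the third group's chapters are clearly shorter, so p comes out well below 0.05; a multiple-comparisons test (e.g. Tukey's) would then identify which pairs differ.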
I am considering p values < 0.05 to be significant; I have included non-significant p values on some figures to provide additional context. If there is no p value on a given figure, you can assume I found no significant differences amongst POV charactersâ chapters and/or pages in my analysis.
If you have questions about the analysis please let me know! :)
Supplemental figures: TRK data shown for all unique POV characters (these figures do not have captions in the alt text; the four on the left depict distribution of chapters and pages across the 19 different POV characters in TRK; the vast majority of the chapters and pages are attributed to either Blue, Gansey, Adam, or Ronan. The two figures on the right show mean pages per chapter for all characters, with no clear/statistically significant trends across the data. The demon's chapters are noticeably short.)
#rchl#trc#trc data#the raven cycle#the raven boys#the dream thieves#blue lily lily blue#the raven king#this was sooooooo fun btw do we want setting data next maybe?#im also working on a more language driven project about usage of camaro vs pig#also i have reached self parody by including a supplemental section like that is truly absurdly me
Note
out of curiosity, if Max doesn't release its metrics, then what metrics are you actually using to make these statements about the show's popularity? what does "it's currently in the 99.7th percentile of the comedy genre, meaning it's in higher demand than 99.7% of all comedy series in the u.s." even mean? How are you measuring what's "in demand" - by who? Where? It's bold to claim that this show was wildly popular (despite the fact that I never hear about it outside of tumblr, tho that's a personal anecdote) but cancelled just for being queer, so I would be really interested to know where you're getting all these numbers from. Thanks!
hey anon! first of all i am so sorry for the delayed response. i started typing something up and then i got distracted with something else and totally forgot about this in my drafts.
sure, i have no problem citing sources. i probably should've linked some in my original post, that's absolutely fair.
this ended up way longer than i planned so bear with me, but a quick overview of what i'll be going over:
1) what are the stats/where did they come from?
2) how is the show so popular?
3) was it really cancelled for being queer?
(also just a disclaimer that this will contain spoilers for the show)
1) first, the numbers
youâre right that hbo doesnât release metrics to the public. in fact, ceo casey bloys tried to justify the cancellation to the hollywood reporter by saying âthe numbers werenât there,â despite refusing to say what exactly those numbers were or where they came from.
however, there are websites dedicated to researching/analyzing the data of different media. one of those is parrot analytics, who focus on industry insights like audience demand, competitive analysis, and content valuations. they're trusted as a reliable source by forbes, the new york times, reuters, the wall street journal, and more.
this is what we can learn from them about our flag means death from a basic google search (note that all of this data is relevant to the last 30 days as of january 26 2024):
audience demand for our flag means death is now 33.6x greater than the average tv series in the united states. as explained in the "about demand distribution" section, this means it's one of only 0.2% of all u.s. shows to fall in the "exceptional" performance range, compared to the "average" demand benchmark of 64.1%.
the change in demand for ofmd in the u.s. has increased by 7.5% compared to the average tv series.
ofmd now actually ranks at the 99.8th percentile in the comedy genre in the u.s. i'm not a math person, but in basic terms, a percentile ranks a show against every other show in its genre: sitting at the 99.8th percentile means demand for ofmd is higher than demand for 99.8% of all comedy series in the u.s. essentially, ofmd is performing at the very top of the genre.
ofmd has 100% home market travelability. as it says above, the market of origin is always 100%, so in fairness i included the graph of international markets for comparison. some of these aren't super high; however, as parrot analytics themselves explain, one of the key issues with the international market is accessibility to content, which has been an ongoing struggle for international fans. many people (i guess fittingly) have resorted to pirating ofmd because they don't have access to max or affiliate streaming services in their country.
there are more stats i wanted to go more in-depth into, but that would make this even longer than it already is, so i'll just leave some links you can check out if you're interested and move on:
• comparison of ofmd's success to shows like ted lasso, euphoria, and peacemaker
• ofmd's placement as #1 most in-demand breakout series in the u.s. for 8 weeks
• ofmd's impressive 94% critics score and 95% audience score on rotten tomatoes
• how ofmd evolved from sleeper hit to a flagship series at max
• a list of ofmd's past and present award nominations/wins
• praise and recognition from news/entertainment sites: the atlantic (2022); the new york times (2023); tv guide (2023); vulture (2023); forbes (2023); the los angeles times (2022); vanity fair (2023)
2) so why haven't you (or others) really heard of the show outside of tumblr despite all this success? likely because max did a terrible job marketing it.
ofmd first aired on hbo max (pre-merger, before it was "max") in march 2022. the entire season aired over one month, every thursday at 12am pst. season 2 followed a similar release schedule in october 2023.
season 1's marketing was almost non-existent, pretty much relying on taika waititi's name being attached. there was one teaser and one full-length trailer, as well as a few clips on youtube of taika and rhys darby answering pirate-themed trivia, all painting the show as a "silly pirate workplace/buddy comedy." but hbo max didn't put any real effort in because they didn't care. david zaslav and the other higher-ups had no faith in the show and expected it to fail.
most people weren't aware it was actually a romance due to the poor marketing, and although there were many romantically charged scenes between ed & stede, many viewers were still wary to believe it wasn't queerbaiting until the two confessed their feelings and kissed.
showrunner david jenkins has said in interviews that he had no idea how deeply queerbaiting had hurt audiences and impacted their ability to trust what's on screen without feeling like they're being ridiculed, despite the fact that he was calling it a love story the whole time. it wasn't until people realized they weren't being queerbaited and that it was a funny, sincere show with a compelling plot that word-of-mouth began to spread. by the time the season 1 finale aired, there was a decent-sized fandom that continued to grow as it received more praise.
it was a fight to even get the show renewed for season 2, and david jenkins and the cast have majorly credited that renewal to the unexpected and massive fan response to the show, which basically forced hbo's hand.
max didn't bother trying to properly promote the series until season 2, when they begrudgingly accepted that it was one of their most profitable and successful shows. ofmd had huge billboards in times square, downtown los angeles, and on the side of hbo headquarters. they started accurately marketing the show as not just a workplace comedy at sea, but a heartfelt romcom. max began selling long-demanded merch (which became best sellers) and spent money on an FYC campaign.
i will emphasize, whether they liked it or not, they knew ofmd was their new moneymaker (especially with the recent end of succession, which was obviously a cash cow for hbo).
photo credit: @/bookishtheo
3) now if it was that successful, was it really cancelled just for being queer?
i mean, i can't say that definitively. no one can. there are several potential factors at play that we may never know, and there have been a lot of rumours and speculation (many of which i don't feel comfortable discussing in case they aren't true) since the cancellation.
but do i believe the fact that it's a queer romcom was one of those factors, especially since max has a history of cancelling and scrapping its most diverse projects? absolutely.
first and foremost, i can't stress enough that this isn't just a show with a few characters thrown in for token representation. ofmd is built on a diverse, intersectional cast and narratives, including:
• lgbtq+ representation: 5 main couples are explicitly queer (including mlm, wlw, nblm, and nblw relationships). multiple characters are implied to be poly, and there's a polycule forming in season 2 that was hinted to be developed more in season 3. beyond relationships, it's confirmed that (similarly to the way wwdits depicts all vampires as being pansexual) all of the pirates are somewhere on the queer spectrum.
• bipoc representation: the majority of the main cast are people of colour. this includes david fane, joel fry, leslie jones, samson kayo, vico ortiz, anapela polataivao, madeleine sami, samba schutte, ruibo qian, and taika waititi, as well as many guest actors (like rachel house, simone kessell, and maaka pohatu) and extras.
• disability representation: multiple characters have physical disabilities, most notably amputated/prosthetic limbs and visual impairment. a lot (actually most) of the characters also deal with mental health issues, particularly coping with severe trauma and suicidal ideation/behaviour.
• the show has been praised for addressing difficult and serious themes like toxic masculinity, colonialism, and self-discovery, all while still managing to be a witty comedy and not come across as "preachy."
• the diversity also extends off-screen, with a team of directors, writers, and additional crew comprised of numerous bipoc, women, queer people, and trans/non-binary people.
my point isn't just the quantity of representation, but the quality. they put great care and respect into every marginalized group depicted on-screen. the actors would often be consulted about their characters' costumes, hair, tattoos, and the kind of language they use. it's not a world where discrimination magically doesn't exist, they just have zero tolerance for it. if a character does something homophobic or racist, you can guarantee they'll quickly (and often violently) be punished.
so okay, sure, it's got great representation. what does that have to do with max cancelling it?
because they've been interfering with production from the start.
i already mentioned the marketing issues so i won't get into that. it was also revealed in interviews with david jenkins after season 2 that hbo cut their budget by 40%, which is why they had to do everything they could to save money. this included letting go of some of the original cast (and even still having episodes where some of them don't appear at all) and moving the entire production to AoNZ. the budget cuts also meant two fewer episodes, so they had to rush to fit an entire season's worth of plot into eight half-hour episodes.
but one of the biggest frustrations is hbo's (alleged) censorship of the show. samba schutte revealed that the entire plot of episode 2x06 was completely different in the original script. before it was rewritten as "calypso's birthday," the episode took place during lucius & pete's wedding and focused on the crew getting sick of the sexual tension between ed & stede and trying to get them to hook up (this aligned with lucius & pete getting engaged and ed & stede deciding to take things slow in the previous episode).
vico ortiz and writer jes tom have also commented that many scenes between jim, oluwande, and archie establishing them as a polycule were cut, including one of the three of them emerging from a bedroom in their underwear. jes has mentioned other elements of season 2 that had to be cut out or rewritten, like the implication of other poly dynamics between the crew and more sexually explicit scenarios and jokes.
considering that ofmd is an extremely sex-positive show that isn't afraid to be raunchy or taboo, it's clear that either higher-ups at hbo forced them to cut these things out or they had no choice but to cut them due to tight budget/time constraints.
in addition to this, a recent article citing an "anonymous insider" has alleged that hbo was uncomfortable with and unsure how to market the "shock violence" in the show (the same network that aired game of thrones), which david jenkins outright called out as being bullshit. ofmd is rated TV-MA, and the posters and trailers all show the audience that it contains violent content. there is literally nothing more graphic in ofmd than in any other pirate show - it's probably a lot tamer than most of them, actually.
violence on the show is most frequently used in a comedic context, in the sense that it's not meant to be seen as scary or taken seriously. the few instances of seriously graphic imagery on the show are meant to invoke a mood shift, like ed's transformation into the kraken or ned low's murder. it should also be noted that some of the most graphic deaths are reserved for bigots, like ed snapping the neck of a colonizer who was ridiculing stede's love letter.
it's also most often used in a sexual context - not sexual violence, but violence as a sexual metaphor. more specifically, the act of stabbing as a metaphor for penetration, as seen with both ed & stede and anne & mary. bearing all this in mind, it seems like the real issue here isn't executives struggling to market explicit violence to a mainstream audience, but rather explicit gay content.
as much as we joke and affectionately call it the "gay pirate show," ofmd has always been nothing more than an opportunity for rainbow capitalism for hbo (e.g. the fact that they waited three months to announce season 2 just so they could do it on the first day of pride month). like other cancelled queer media, ofmd was a way for hbo executives to show how "inclusive" and "accepting" they are when it was convenient (aka profitable) for them, but they never actually respected the show or us as a community.
it's impossible to be certain of what the exact reasoning for cancellation was, especially when they won't give us a clear answer themselves. and maybe it had nothing to do with ofmd being a queer romcom at all. maybe that's all a horrible coincidence. but for hbo/max to axe a critically acclaimed and beautifully inclusive show that's successful by every metric, with an extremely devoted fanbase, especially after casey bloys just had the nerve to ask "gay twitter" to hype up the gilded age? it doesn't exactly put them in the best light regardless.
in summary, i'll leave you with this editorial, which details how the campaign to save ofmd isn't just about one show, but a fight to save the future of all queer art.
Note
So, I was gathering reference for a map making project and when looking at the Vale World of Remnant video got reminded of this map and was curious if you had any thoughts on things:
This was its follow up:
i think about those red settlements all the time. (i have another post about this somewhere but it's lost in the tumblr abyss. rip.) now, first thing, the placement of these settlements is clearly vibes-based - we know there are no settlements in the menagerian interior and there's no green marker where the city of vale should be, etc - so their distribution is probably meant to give more of a general sense of where people tend to be in a world populated mainly by grimm, rather than the exact location of specific settlements. in any case, what's up with ~~quadling country~~ those settlements on the "uninhabited" continent where salem lives?
third data point, from the great war spot:
which implies that battles were fought across the southern peninsula of the "uninhabited" continent during the great war; this does not make sense to me unless there were people there, so my thinking is that these red settlements must have existed at the time and their obliteration occurred during the course of the great war, which tracks with qrow noting many settlements were permanently lost due to the fighting and the grimm.
but that leaves the questions of 1. who were these people, and 2. why hasn't anyone ever mentioned them? - because if this was an entire kingdom that got annihilated during the great war, you'd think that would merit at least a footnote in the great war episode. the two explanations that i can think of are:
#1, the red settlements represent a now-defunct faunus kingdom that is receiving the same treatment in history books that menagerie receives politically in the present - namely, the "four kingdoms... (and menagerie)" thing where the faunus kingdom is not only completely marginalized on the international stage but humans barely even acknowledge that it exists.
this would also make menagerie a much harder kick in the face, if the faunus already had a kingdom - a much larger one, even! - that was razed to the ground, ended up impossible to reclaim from the grimm, and got unceremoniously erased by human historians while the human kingdoms were like "stop complaining, we gave you an island!" about it.
however, i'd think this would be a bit of history blake knew, and there's an obvious place in V4 for her to have brought it up, because she talks about the history of the great war in relation to menagerie!
or,
#2, the red settlements were founded by vacuans displaced by the conquest and occupation of vacuo. this conveniently explains why they've never really been explained, if they were separate from vacuo by virtue of not being under mistrali occupation but also still, in a way, considered "part" of vacuo because the people living there were vacuan—the settlements were destroyed or abandoned after the war, sure, but in the end the vacuans who lived there got to go home and have their own kingdom back. i think this is also more in keeping with the decolonial project of the vytal accords, with imperial territories being liberated and so forth. and you still get the whitewashing-history vibe of well, sure these vacuans lost their homes but they got a kingdom in the end so…
<- this would make the vacuo arc the natural place for it to come up in the story proper, because the vacuans would remember even if the rest of the world didnât.
either way, the fact that the settlements are 1. located on the continent where salem presumably lives, and 2. color-coded red definitely implies some connection between her and them—whether that's a legitimate connection or just ozma making an assumption that people living in her proximity must secretly be under her thumb remains to be seen, but it's interesting. i do think it's quite unlikely that salem herself is the one solely responsible for their obliteration, because the great war episode does show that there was fighting in that region; if she did attack them she did so under the cover of a level of destructive warfare that would have wiped them off the map anyway.
i'm also very interested in the menagerian-lime settlements on anima—it's always been my assumption that that's where the WF is headquartered and the idea of faunus-majority settlements in southern anima, protected by menagerie, compels me.
#i assume all the valean settlements on the east coast are meant to represent like #small villages because there's plainly not much there in the show proper - #but ficwise the apparent density in the east is why i have Eastern Vale at all :3c #(similarly in the fic the uninhabited continent - alukah - isn't uninhabited it's just #predominantly populated by nomadic peoples so the sedentary kingdoms act like it doesn't count)