BLOG POST #8 - Due (04/10)
1. How does 2016 “Black Twitter” compare to black digital spaces today?
In 2016, “Black Twitter” was a loosely connected online cultural force. It helped amplify movements like #BlackLivesMatter, challenged media narratives, and created viral content rooted in shared cultural experiences. It was known for being fast, funny, politically sharp, and capable of sparking real-world conversations. Prof. Latoya Lee writes that Black Twitter pushed forth a “consciousness [which] encourages pride in blackness and stimulates communal responsibility among all people of color for one another and for the purpose of challenging implicit and explicit racial bias” (Lee, 7). Although Black representation today is spread across a greater diversity of platforms, there is far more censorship and algorithmic bias, and a renewed lack of spaces aimed at spreading awareness of environmental issues.
2. What are ways people can organize socially today?
One form of technology that makes social organization possible is the use of hashtags, trends, and threads. Although they are not always political, they can be employed to further a political cause. Servers on platforms like Twitch and Discord also allow people to group together around shared interests and hobbies. There are also pop-up events where organizers or coalitions can table and get their names (or the names of their causes) out. In the case of the #NODAPL movement, word spread as “evidenced through live video streaming, social media campaigns, Facebook check-ins, crowdsource funding, and posts to social media or blogs to share personal experiences and disseminate information” (Deschine Parkhurst, 38). Today, social media can be used as a tool (for better or worse) to bring together like-minded individuals.
3. How are Indigenous peoples misrepresented in contemporary media?
Indigenous people are often misrepresented through stereotypes and inaccurate portrayals of their culture, traditions, and activities. The media also fails to educate the masses that we are on stolen land, and it continues to dispossess Indigenous people of their land and practices.
4. How did the #NODAPL movement disrupt physical spaces?
Our guest speaker Nicholet Deschine Parkhurst shed light on the No Dakota Access Pipeline movement and how the Standing Rock Sioux Tribe disrupted physical spaces through protests and occupations that blocked access to the pipeline, through confrontations with police and banks, and by gaining national attention after a news reporter was maced in the face. “The #NoDAPL movement drew widespread intertribal, national, and international support and solidarity from organizations and various municipalities” (Deschine Parkhurst, 37). This widespread media coverage of the Standing Rock Sioux Tribe and the #NoDAPL movement was further disruptive, as it encouraged other tribes to stand in alliance or to protest along the pipeline path for their own causes and resistances.
References
Deschine Parkhurst, Nicholet A. Indigenous Peoples Rise Up: The Global Ascendency of Social Media Activism. New Brunswick, Rutgers University Press, 2021.
Lee, Latoya. “Black Twitter: A Response to Bias in Mainstream Media.” Social Sciences, vol. 6, no. 1, 5 Mar. 2017, p. 26, https://doi.org/10.3390/socsci6010026.
Blog post #7 - due (3/20)
1. How does the anonymity of interactions in cyberspaces contribute to the prevalence of hate crimes online?
“Anonymity allows trolls to engage in behaviors they would never replicate in professional or otherwise public settings,” because such behaviors are deemed “socially unacceptable, or because the trolls’ online persona would clash with their offline circumstances” (Phillips, p. 84). Phillips also makes a great point: “successful trolling depends on the target’s lack of anonymity” (p. 85). Anonymity in cyberspace allows users to act without consequences, or the fear of them. It creates a sense of detachment from the real world and emboldens people to express negative or hateful feelings, thoughts, or phrases that they might not otherwise express in person.
2. What are some psychological effects of online trolling, and what are their real-world implications?
Trolling may result in anxiety, depression, lowered self-esteem, or isolation. Those who experience hate crimes, hate speech, or trolling online may experience emotional distress. This type of online bullying may cause younger, naive individuals to inflict harm on themselves or on other people in their lives. Aside from psychological effects, trolling and bullying online have real-world implications for future professions. D.K. Citron details the case of a woman who was cyberbullied and doxxed online, falsely “exposed” as someone who spread hate speech and encouraged unsafe sex; she feared this false digital footprint would later catch up to her, and “she worried that future employers might not be as understanding” (Citron, 2014, p. 2). It is very easy for people to fabricate misleading stories about you, and even easier for others to believe them. Such instances can cost people their livelihoods, or even their lives.
3. What is the importance of digital literacy and how can it prevent the spread of hate crimes?
It is very important to practice digital literacy, as it can help vulnerable individuals navigate the digital sphere safely and responsibly. Digital literacy may also help us avoid scams, hackers, or the doxxing of our personal information. By practicing digital literacy, users may also be able to utilize these online platforms in positive ways, such as community building, engaging in critical and thought-provoking conversations, and educating themselves on current events.
4. How can we differentiate between satire, hate speech, and trolling?
Satire is a form of humor, often used to “rage-bait,” garner followers, or even start a conversation that addresses societal absurdities. Hate speech usually involves inciting violence, discrimination, or hatred toward certain groups of people. It is important to acknowledge that while satire is often comical, its main intention is to provoke conversation or critique, whereas hate speech and trolling are used to directly harm another person or group. As stated in my previous answer, practicing safe online habits and taking steps to become digitally literate can help mitigate the confusion between satire and hate speech or trolling.
References
Citron, D.K. (2014). Hate crimes in cyberspace. Harvard University Press.
Phillips, W. (2016). This is why we can’t have nice things: Mapping the relationship between online trolling and mainstream culture. The MIT Press.
BLOG POST #6 - Due (3/13)
1. In what ways do cyborgs in cyberspace blur the lines between human and machine?
There are many ways that cyborgs in cyberspace blur the lines between human and machine. For one, technologies such as artificial intelligence, virtual reality games, and brain-computer interfaces continue to advance. We also have developments such as implants and prosthetics, which are utilized to improve a human’s quality of life. Such advances may be seen as beneficial to human life, although “it is not clear who makes and who is made in the relation between human and machine. It is not clear what is mind and what body in machines that resolve into coding practices” (Haraway, pp. 356-357).
2. How is heteronormative masculinity reinforced or challenged in online communities?
Cyberspaces such as gaming and social media promote traditional ideas of masculinity, such as aggression, dominance, and competition. These spaces reinforce hegemonic ideals of masculinity; however, they are continuously challenged through performances of femininity and by inclusive communities that aim to break away from hegemonic norms.
3. In what ways does the performance of femininity in cyberspace challenge or reinforce patriarchal structures?
In terms of challenging, femininity is exemplified by individuals who assert their autonomy and define their own gender roles. Feminists tend to challenge traditional notions, such as the ideas that women are submissive, must meet a Eurocentric beauty standard, and exist to be sexualized, in cyberspaces that uphold patriarchal values. On the opposite end, femininity may reinforce patriarchal structures through various social media sites (e.g., TikTok, Instagram, OnlyFans) that pressure women to conform to idealized versions of femininity.
4. How do video games and social media platforms perpetuate the ideals of hypermasculinity or toxic masculinity?
One major way that video games perpetuate ideals of toxic masculinity is through the heavy masculinization of protagonists in first-person shooter (FPS) games. These main characters are often portrayed as rugged, aggressive, and disconnected from humanity. When picking screen names, “men tend to choose screen names that refer to and honor heroes and martyrs of the movement, while women mostly do not” (Daniels, p. 64). Not only do male players attribute their characters to masculine heroes and martyrs, they also partake in toxic habits such as bullying, harassment, and using slurs in chatrooms and voice chat.
Blog #5 - Due 3/6
1. Why is colorblindness harmful, especially in online spaces?
Colorblindness promotes the idea that people do not “see color” and, therefore, that how people are treated has nothing to do with race. This ideology in and of itself is harmful because it ignores the factors that play into systemic racism. In online spaces, it is dangerous because it dismisses people’s lived experiences and makes it harder for marginalized users to advocate for themselves on a digital scale.
2. Why do many white folks often refute white privilege?
White people often dismiss their white privilege due to a lack of awareness, discomfort with admitting privilege, and societal narratives that reinforce the idea that we live in a meritocracy.
3. Why is race considered a form of technology?
Technology isn’t just electronic; it also encompasses the ways we categorize and organize information, which shape our society. Race is not a biological fact but a social construction that affects how people interact with one another. Race influences various aspects of life, including how individuals are treated in the economy, in politics, and in social interactions. Race can be considered a form of technology because it shapes outcomes in powerful ways, often without us even realizing it. For instance, systems like law enforcement, schools, and healthcare can perpetuate unfair treatment based on race, keeping certain groups at a disadvantage. In this way, race serves as a tool that helps maintain existing social inequalities.
4. What is tokenism, and how is it harmful in online spaces?
Tokenism involves the inclusion of individuals from marginalized groups to display “diversity” when none truly exists. According to R. Benjamin, “tokenism is not simply a distraction from systemic domination. Black celebrities are sometimes recruited to be the (Black) face of technologies that have the potential to deepen racial inequalities” (Benjamin, 2019, p. 55). Tokenism, especially in digital spaces, devalues the contributions of marginalized people, perpetuates a superficial and fake form of diversity, and reinforces stereotypes by reducing groups of people to a single symbol.
References
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.
Blog Post #4 - Due (2/27)
How do contemporary cyberspace studies ignore the issues regarding race?
In many contemporary cyberspace studies, researchers see the internet as a neutral tool that has transcended cultural and racial boundaries over time. However, this neutral regard for technology fails to acknowledge the inequities that exist in cyberspace, which disproportionately affect those marginalized by institutional racism, power dynamics, and history.
What are some of the ways that invisible cyberspace identities can promote harmful stereotypes of race?
There are multitudes of ways in which individuals can engage in racism behind a screen. Racist practices such as blackface and yellowface are utilized to assume the identity of a person of color and promote negative portrayals under a disguise. “Ow’s essay describes the racial politics and modes of representation that are enacted in the everyday practice of contemporary video games; in particular, Ow analyzes the game Shadow Warrior and argues that the game forces the player to occupy a racist, violent, and colonist subject position” (Kolko et al., 2013, p. 10). Many races are also subject to exoticization, fantasization, and misrepresentation. Other issues include trolling and harassment in the form of doxxing, bullying, and the use of slurs, stereotypes, and negative conversations that take place through screens.
How do cases such as the Yellow-faced cyborg relate to technological issues that perpetuate a malleable Asian identity?
The case of the yellow-faced cyborg ties to the practice of “yellowface,” which involves adopting or pushing exaggerated stereotypes of East Asian individuals. In many cyberspaces (online spaces), racial identities are easily shaped or distorted (as with blackface). Asian identities in particular are often appropriated or altered, as in Jeffrey A. Ow’s example of Lo Wang, a character who is neither clearly Chinese nor Japanese, but is a ninja who eats fortune cookies to regain health. Not only are these characterizations of Asian characters false and culturally incorrect, but they are normalized and often utilized for profit. In the media, Asian women are portrayed as submissive and subservient, and are often hypersexualized. We see instances of this hypersexualization in the cyberspaces of K-pop, gaming, music, anime, and more.
What are some ways that people can stay safe when using cyberspaces?
Although we can never fully guarantee our security or the security of others, we can safeguard our identities and devices by using strong passwords, enabling two-factor authentication, learning to recognize scams, keeping software up to date, using VPNs and antivirus programs (as well as ad blockers), and being very careful about the information we share online. During class on Wednesday (2/26), we discussed these techniques for safeguarding ourselves and our loved ones, as well as the importance of being safe with technology.
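As a small illustration of the first of these habits, here is a minimal Python sketch (my own example, not something from class) that generates a strong random password using the standard library’s secrets module:

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from a cryptographically secure random source,
    # unlike the general-purpose `random` module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # output varies on every run
```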
References
Kolko, Beth, et al. Race in Cyberspace. Hoboken, Taylor and Francis, 2013.
Blog post #3 - due 2/13
1. How can digital communities empower women, people of color, and other marginalized groups to create alternative narratives and reshape their online identities outside of mainstream ideologies?
Digital communities can provide spaces for marginalized groups (e.g., women, people of color) to control their narratives. The ability to create content and hold spaces primarily or explicitly for their communities allows individuals to reclaim their identities and narrate their own stories. For example, social media platforms allow individuals to tell stories through blogs, podcasts, campaigns, and video productions. Two examples of digital communities uplifting marginalized groups and allowing them to share their stories are the #MeToo and #BlackLivesMatter movements.
2. How do dominant tech companies influence access to digital resources, and how does this control contribute to the widening digital divide among marginalized communities?
Dominant tech companies (Meta, Google, Amazon, Apple) have immense influence over people’s access to digital resources by collectively controlling the platforms that many individuals rely on for communication, education on current events, employment opportunities, and overall socialization. One relevant example is Elon Musk’s use of his social media platform and power to communicate transphobic messages to users on X. The digital divide used to refer to the lack of access to technology among non-white and non-elite groups. However, Anna Everett describes a “newer virtual or cybernationalism” that is “now unbound by traditional ideological, political, economic, geographical, and even temporal boundaries and limitations,” yet dismissive of “African American early adoption of and early involvement with prior innovative media technologies” (Everett, 2009, pp. 138-139). Marginalized communities, such as the Black technophiles who paved the way for communication technology, are often dismissed and unrecognized for their contributions (both past and present). Everett suggests that cybernationalism may unintentionally frame the internet and new media as a “universal” space, ignoring the specific ways marginalized groups, particularly African Americans, have shaped, engaged with, and used these tools to resist oppression. The contributions of African Americans in digital spaces are often minimized, even though they have been central to movements like #BlackLivesMatter and other online activism efforts. At the same time, there has been a significant shift in how digital media shapes national and global identities that are no longer limited by economics, geography, and politics.
3. How do intersectional power dynamics influence the types of technologies that are developed, and who benefits from them?
Kimberlé Crenshaw defines intersectionality as a “metaphor for understanding the ways that multiple forms of inequality or disadvantage compound themselves.” Systems of oppression (e.g., racism, classism, sexism) compound and, therefore, significantly influence who is allowed to develop technology, as well as which technologies get created. No technology can be truly free of bias, as the products and forms of technology that an individual creates are a direct reflection of their worldview, values, and needs (what they deem to be of importance). For example, healthcare technology greatly undervalues the needs of marginalized groups, such as the health challenges of Black women and disabled people. A field review by Kadija Ferryman states that “[r]acial and ethnic minority groups also seem to be missing from EHR data, which can lead to bias if these data are used in digital health applications.”
4. How can tech companies and digital communities redesign their platforms to better address the intersectional needs of diverse users, and what steps must be taken to ensure these changes go beyond tokenism?
I believe tech companies can improve their platforms to better address the intersectional needs of users by engaging in inclusive design processes that ensure people from diverse backgrounds are part of development, by making algorithms fair and transparent (especially where bias is blatant, as with facial recognition AI), and by redesigning platforms to be accessible to those with disabilities, for example through features such as screen readers and captions. To ensure these changes go beyond tokenism, companies should allow the voices of marginalized communities to be heard and keep them informed of the conversations held during development, as sketched below.
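One concrete (and hedged) example of what “making sure algorithms are fair and transparent” can look like in practice is a simple audit of a model’s decisions across demographic groups. The sketch below is my own illustration with made-up data, not any company’s actual process: it computes the approval rate per group, a basic demographic-parity check.

```python
from collections import defaultdict

# Hypothetical (group, model_decision) pairs; 1 = approved, 0 = denied.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    approvals[group] += decision

# A large gap in approval rates between groups is a red flag worth auditing.
for group in totals:
    rate = approvals[group] / totals[group]
    print(f"{group}: approval rate {rate:.0%}")
```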
References
Everett, A. (2009). Digital Diaspora. State University of New York Press.
Ferryman, K. (2022, March 1). Framing Inequity in Health Technology: The Digital Divide, Data Bias, and Racialization. MediaWell, Social Science Research Council. https://just-tech.ssrc.org/field-reviews/framing-inequity-in-health-technology-the-digital-divide-data-bias-and-racialization/
National Association of Independent Schools. (2018, June 22). Kimberlé Crenshaw: What is intersectionality? [Video]. YouTube. https://youtu.be/ViDtnfQ9FHc
Blog Post #2 - due 2/6/25
1. How do digital security measures reinforce existing power structures, particularly in terms of class, race, and access to resources?
Although we are typically unfazed by the prevalence of the security cameras and data-collection systems we come into contact with on a day-to-day basis, we don’t realize how much more prevalent they are in low-income neighborhoods, where crime is much more likely to be reported. One fact that stuck out to me was how digital security guards are “so deeply woven into the fabric of social life that, most of the time, we don’t even notice we are being watched and analyzed” (V. Eubanks, 2018, p. 16). In my own experience, I have worked a few sales jobs where only one or two cameras surveil the exterior of the shop, where customers frequent. However, when I need to drop cash in the safe located in the office, I am usually overwhelmed by the number of cameras on the monitor, which covers every corner of the employees’ workspace: about five separate cameras capturing various angles of a single space. This heightened surveillance serves as a tool to monitor productivity and compliance with policies, but also to reinforce power imbalances between employers and their employees. These cameras may also be used to target and monitor specific racial groups, as an employer may watch a Black or Latino worker far more closely than a White or Asian worker holding the same position.
2. Nicole Brown poses a significant question: “Do we really understand the far-reaching implications of algorithms, specifically related to anti-Black racism, social justice, and institutionalized surveillance and policing?” (Brown, 0:14). The answer is, in many ways, complex. However, Brown brings up a very important point. Many algorithms are trained with the potential to improve many areas of our lives; however, they can prove damaging in predictive policing and in perpetuating biases and inequalities. According to Christina Swarns, in an article titled “When Artificial Intelligence Gets It Wrong,” “facial recognition software is significantly less reliable for Black and Asian people, who, according to a study by the National Institute of Standards and Technology, were 10 to 100 times more likely to be misidentified than white people.” This further emphasizes how algorithms may produce false identifications, caused by a lack of diversity in their training data, and highlights the need for improvements in algorithmic technology to mitigate the harm done to marginalized communities. In regards to predictive policing, algorithms trained to predict crime rely on historical crime data, which may result in higher policing rates for certain areas when the historical data merely reflects biased policing as opposed to true criminal activity.
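To make that feedback loop concrete, here is a toy Python simulation (my own illustration, not drawn from any real policing system): patrols are dispatched to whichever neighborhood has the most recorded incidents, and because patrols generate new records wherever they go, an initial disparity in reporting compounds over time.

```python
import random

# Toy model: two neighborhoods with identical true crime rates,
# but neighborhood A starts with more *recorded* incidents.
records = {"A": 30, "B": 10}   # biased historical data
TRUE_RATE = 0.5                # same underlying crime rate everywhere

for year in range(10):
    # "Predictive" step: patrol wherever the records point.
    patrolled = max(records, key=records.get)
    # Patrols only observe crime where they are sent, so only the
    # patrolled neighborhood accrues new records.
    records[patrolled] += sum(random.random() < TRUE_RATE for _ in range(20))

print(records)  # e.g. {'A': 130, 'B': 10}: the initial gap snowballs
```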
3. How do surveillance and algorithms affect healthcare outcomes for minorities?
In a video titled “Race and Technology,” Nicole Brown explains that since White people are recorded as making up the majority of healthcare consumers, the healthcare system’s algorithms deem White individuals more likely to require healthcare than their non-White counterparts (Brown, 2:12). Although we are used to having our information and activity utilized by social media platforms to generate user-centered content, I think the connection between algorithms and healthcare outcomes is an interesting topic to unpack, as I had never thought about it before. “Doctors and other health care providers are increasingly using healthcare algorithms (a computation, often based on statistical or mathematical models, that helps medical practitioners make diagnoses and decisions for treatments)” (Colón-Rodríguez, 2023). Colón-Rodríguez cites the case of a woman who gave birth via c-section in 2017; the database was later updated to reflect a false prediction that Black/African American and Hispanic/Latino women were more likely to need c-sections and less likely to give birth vaginally than White women. This prediction was false, and it caused doctors to perform more c-sections on Latina and Black women than on White women. C-sections are generally safe but are known to cause infections, blood clots, emotional difficulties, and more. This case study reflects how healthcare databases often profile individuals based on race and may make false predictions, which oftentimes result in unnecessary, and sometimes life-threatening, outcomes for minorities (in this case, minorities with vaginas).
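As a toy sketch of the mechanism described above (my own simplified illustration, not the actual clinical calculator), consider a risk model that applies a race-based “correction” to its predicted chance of a successful vaginal birth; the adjustment alone is enough to push otherwise identical patients toward different recommendations.

```python
def vbac_success_score(base_score: float, race: str) -> float:
    """Toy risk score with a race-based adjustment (illustrative only).

    The penalty value and threshold below are made up for demonstration;
    they are not taken from any real calculator.
    """
    penalty = 0.15 if race in {"Black", "Hispanic"} else 0.0
    return base_score - penalty

for race in ("White", "Black", "Hispanic"):
    score = vbac_success_score(0.60, race)  # identical clinical profile
    advice = "attempt vaginal birth" if score >= 0.50 else "recommend c-section"
    print(f"{race:8s} -> score {score:.2f}: {advice}")
```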
4. In what ways does the normalization of surveillance threaten democratic values like free speech, freedom of assembly, and the right to privacy?
Similar to the feelings about surveillance I expressed in question 1, the normalization of surveillance in many aspects of our society may cause self-censorship, suppression of dissent or of negative feelings toward individuals of higher status, and the exploitation and misuse of personal data. Workers may censor the topics they speak about for fear of customers or employers overhearing. People may also censor themselves in a crowd, where phones may be used to monitor activity. Fear of reprisal may cause individuals to refrain from speaking out about injustices and from expressing dissent toward people, policies, or events. The misuse of personal data threatens our right to privacy because we as consumers are unaware of what exactly is being collected, and even when we are aware, we are not told how long such data will be held and used.
References
Brown, N. (2020, September 18). Race and Technology. YouTube. https://www.youtube.com/watch?v=d8uiAjigKy8
Colón-Rodríguez, C. (2023, July 12). Shedding Light on Healthcare Algorithmic and Artificial Intelligence Bias. Office of Minority Health. https://minorityhealth.hhs.gov/news/shedding-light-healthcare-algorithmic-and-artificial-intelligence-bias
Eubanks, V. (2018). Automating Inequality: how high-tech tools profile, police, and punish the poor. St. Martin’s Press.
Swarns, C. (2023, September 19). When Artificial Intelligence Gets It Wrong. Innocence Project. https://innocenceproject.org/when-artificial-intelligence-gets-it-wrong/
BLOG POST #1
STORY TIME: Tell us about a time when technology didn’t live up to its promises (to the hype).
Technology has come a long way, but let this be a reminder that it can malfunction at any given time, contributing to our disappointment. When asked to tell the class about a time when technology didn’t live up to its “hype,” I am forced to reminisce on my days in lockdown, when I used technology to fill the void of connection.
Although I’m embarrassed to admit it, I used apps such as Monkey and Yubo to connect with people my age. For those of you who do not know, Yubo was an app created to allow teenagers (marketed for ages 13-17) to expand their social circles online with friends worldwide. It was primarily marketed to Gen Z and had age restrictions (which used your ID to validate your age), community guidelines, and prohibitions on pornographic, sexually explicit, or violent content.
Despite these strict terms and guidelines, many users slipped through the cracks. For one, people aged 18 and over would use the app, pretending to be younger and preying on naive children seeking romantic attention. Beyond censoring swear words, there were no moderators who could regulate age ranges, meaning teens aged 15 and older could direct-message children who were 12 or 13. I was 14 years old at the time, but that did not prevent individuals three years older than me from “swiping right” and messaging me. At the time, the attention was addicting; now I look back and realize how wrong that was, and how I wish I had found another outlet for social engagement. These older men would send unsolicited messages and pictures to me, and that is a major reason why I deleted the app. I think this is a good story for the prompt, as most of my peers probably imagine innocent scenarios where sites crash, friends are butt-dialed, or the wrong message is sent. It’s important to acknowledge that there is a dark side to social media and technology, and although there were probably good intentions behind the creation of such apps, it is inevitable (though oftentimes preventable) that something intended for good falls into the wrong hands. Apps such as Yubo do not live up to their promises because individuals take advantage of loopholes (or gaps) within the guidelines for their own benefit.
Safe to say, I was not satisfied with my experiences on this app, and I will always be wary of people online, especially on sites such as Instagram and TikTok, where it is so easy for people to deceive and manipulate others.