Blog Post #7! Due 3/20/25
How can the internet’s use as a form of organization be positive or negative?
The internet’s capacity to serve as a means of organization comes at a price: there are no real limits on who can use that organizing power. Many positive movements meant to uplift communities and draw attention to real-world issues have gotten off the ground through the internet and social media. At the same time, we need to recognize that these very same platforms used for support can be, and have been, used to spread hate, harassment, and discrimination. As this week’s readings discuss, the internet can just as easily be a tool for those seeking social change as for those spreading harmful rhetoric (as in the example of white supremacist forums). Recognizing that the internet is not a neutral party, but rather a host for a variety of opinions and viewpoints (both harmful and positive), is one step toward recognizing the power of online organizing. The readings’ two cases are examples of people and organizations who have used the internet in their favor, for harm or for good.
Is the internet truly its own sphere separate from the “public”? Why is it that people try to separate the internet from lived reality?
While some people think that what goes on on the internet has no connection to real life, this couldn’t be further from the truth. I don’t think this sentiment is as common now that we live in an era where we heavily rely on the internet, but I still see some people act as if their actions online have no real-world consequences. In our modern day, it is impossible to call the internet a separate “sphere”; we bring all of our experiences from our real lives into how we use the internet, and vice versa. While some people do a better job than others of separating their online life from their real life, it is impossible to navigate the internet with a completely different viewpoint than the one you hold day to day. As for why people try to keep the internet separate from lived reality, I think it has something to do with control. Where real life has structured laws and social rules, the internet is much less rigid by comparison. By treating the internet as “separate,” people find a kind of escapism in it.
Is there such a thing as "out of body" communities on the internet?
I don’t think any community exists only in an online space; every group will have at least some real-world organization. While the internet provides a means for communities to form across geographic/physical boundaries, this doesn’t mean that it is limited to this online space. The authors for this week discuss more “serious” applications, such as with social justice movements and with white supremacist organizing, but I thought about this in the context of fandom/hobbies. While fandoms are predominantly online, they still manifest in the real world through events such as conventions and meetups. Of course, like with the other examples, the numbers matter in actually being able to make these events come to fruition. However, I believe that you will always be able to find an online community’s real-world equivalent if you just search for it.
Is there a distinction between being a "supporter" and a "member" of online social movements? (Question based on Daniels’ “White Supremacist Social Movements Online and in a Global Context”)
In the reading, Daniels discusses how modern online social movements are made up more of “supporters” than of members, resulting in a looser structure. “Members,” as described by Daniels, are part of the movement’s more formal structure and often support it financially (pg. 49). “Supporters,” by comparison, are more loosely connected and do not financially support the organization (pg. 49). This distinction can determine the amount of active participation within online communities and organizations; just because an online forum or website has a lot of members does not mean all of them are contributing to conversations or to organizing.
Citations:
Daniels, J. (2009). Cyber racism: white supremacy online and the new attack on civil rights. Rowman & Littlefield Publishers.
Elin, L. (2013). The Radicalization of Zeke Spier: How the Internet Contributes to Civic Engagement and New Forms of Social Capital.
Blog Post #6! Due 3/13/25
How are we cyborgs in the modern day? Can we opt out of being a cyborg? (Question based on D. Haraway’s A Cyborg Manifesto)
Besides the obvious connection we have to our phones, we are also cyborgs in our dependence on the technology we need to access most other parts of our lives. Things we need to work and function in our modern world, from college coursework to bank accounts, are all on our devices, and we risk failure or punishment if we do not have access to them. This has become so normal to me that I even find myself surprised when I get a professor who uses a paper syllabus instead of a digital one. It’s pretty safe to say that it would be impossible to get through university without at least having a laptop. We access our readings and turn in our work there. Every syllabus I’ve had has listed internet access and a laptop as requirements. In this way, we can’t opt out of being a cyborg. Even if we try to distance ourselves from technology, its predominance in our lives will always show up, and we are quite reliant on it.
In an age where everyone is a cyborg, is there still a boundary between physical and digital? (Question based on D. Haraway’s A Cyborg Manifesto)
Despite our heavy reliance on technology, there is still a boundary between the digital and physical realities of our lives. Online, we often construct our identities differently than we do in the real world, and I think there is still a tendency to act differently in each. This is especially the case for those who prefer to be “anonymous” or who present under a different name and remain faceless. I know that I am much different online than in real life: my digital persona is more outspoken in social interactions, while I am very introverted when it comes to face-to-face interaction. I do think that with technologies such as virtual reality slowly becoming more accessible (such as Meta’s Oculus headsets), we are moving toward a reality where people’s physical forms are more closely linked to their digital ones. Hopefully, we won’t get to that sci-fi movie future where people are plugged into their computers all day, but we are getting pretty close.
What is the idea of a “ghost in the machine”? (Question based on J. Daniels’ Gender, White Supremacy and the Internet)
In this piece, Daniels mentions the idea of a “ghost in the machine,” referencing the scholar who coined the phrase; specifically, Daniels pins this “ghost” as the ideas of race and white supremacy. A “ghost in the machine,” to my understanding, is something that goes unnoticed, or is perceived not to exist, but still influences how the “machine,” or the internet and technology, operates. Daniels discusses how the internet sustains the “ghost” of race by acting as a ground for the broader distribution of racist ideologies and rhetoric, especially on websites such as Stormfront (pg. 86).
How are gender roles reflected in “feminine” technology, such as virtual assistants and video game characters? (Question based on K. O’Riordan’s Gender, Technology and Visual Cyberculture)
Gender roles are reflected in “feminine” technologies through their portrayal as subservient assistants, which plays into the gendered roles of women as caretakers, teachers, and mothers. Female video game characters, even when portrayed as strong, like the example of Lara Croft provided in the article, are still made to fit the idea that women should be beautiful. Video game women are rarely “ugly” or unconventionally attractive, and when they are, certain male gamers cause an uproar about it. I can recall a specific example of people critiquing Aloy from the Horizon series of games, saying that she was unattractive and even editing her face to put makeup on it.
Readings:
Haraway, D. (2006). A Cyborg Manifesto: Science, technology, and socialist-feminism in the late 20th century. In J. Weiss, J. Nolan, J. Hunsinger, & P. Trifonas (Eds.), The International Handbook of Virtual Learning Environments (pp. 354-359).
Daniels, J. (2009). Cyber racism: white supremacy online and the new attack on civil rights. Rowman & Littlefield Publishers.
O’Riordan, K. (2006). Gender, technology, and visual cyberculture: Virtually women. In D. Silver & A. Massanari (Eds.), Critical Cyberculture Studies (pp. 243-254). New York University Press.
Blog Post #5! Due 3/6/25
How do perceptions of the internet as a white/neutral space impact discussions of race and privilege? (Question based on T. Senft & S. Noble’s “Race and Social Media”)
Even when the internet is regarded as a “neutral” space, it operates under the assumption that most of its users are white. Because opportunities for non-white programmers and developers are few and far between, the internet becomes a reflection of the whiteness that created it. When those with privilege create a space for themselves and their lived reality, they often don’t recognize that privilege, and as a result, there is little questioning of how aspects of that space may not be broadly appealing or accessible to those without it. This perception of the internet, with whiteness assumed as the “default,” makes discussions of race and white privilege stand out. Despite the prevalence of anti-Black, anti-Asian, anti-Latino, and other racism, challenging that “default” is what gets singled out and labeled as negative. Within this chapter, the authors discuss how when the internet is designed to cater to non-white users, those users end up having to explain themselves (pg. 113). Whiteness is so ingrained that when it isn’t there, some people cry “reverse racism.”
Why are some social media spaces labeled based on race, sexuality, etc. (for example, “Black Twitter”)? (Question based on ibid.)
For those outside of these communities, I think the assumption of the “white default” does play a role in this. Most people wouldn’t refer to a tweet or joke from a white male celebrity as something that came from “white Twitter.” From what I’ve noticed, when something is posted by anyone who doesn’t fit the default of the “white, cishet man,” immediate attention is drawn to the fact that they don’t fit that default; their race and/or sexuality show up in headlines and in discussions and comments from other users because of those users’ biases. Labeling spaces can be used to create further exclusion. On social media, I’ve seen specific communities labeled separately in ways that exclude them, put them in another section entirely, and make it seem like they aren’t part of that particular platform. On the other hand, labeling spaces based on these identities provides an easy way for people to form groups and discussions with people similar to them.
Why are episodes such as “Nosedive” from Black Mirror so uncomfortable for us to watch? (Question based on Black Mirror’s “Nosedive” and in-class discussion from 3/5/25)
Despite “Nosedive” being an obvious dramatization of the way our world is, it still reflects many of the systems we take part in every day. It feels reminiscent of social norms and the pressure to act “socially acceptable.” In real life, we are heavily restricted by institutions, both social and formal. This episode gives a glimpse of a potential future if we let these institutions gain even more control over our lives than they already have. It seems far off, but I can see a system like this easily being put into place. I also found the episode uncomfortable because Lacie acted in completely reasonable ways but was punished for it. Realistically, we know that we can’t be perfect all of the time, yet the episode still reflects how we are punished for acting “out of line.”
How do capitalism and advertising create an illusion of progress when, in reality, not much is changing on a larger scale? (Question based on Benjamin’s “Race After Technology”)
While advertising is often made to look “inclusive” by representing people who usually aren’t given the spotlight, it does little to move the larger structures away from inequality. Benjamin provides the example of Netflix advertising shows by highlighting Black supporting cast members despite their limited presence in the show. From these ads, people would assume that these cast members are more prominent or have been given lead roles, but this is not the case. While these ads provide representation, they ultimately do not change the structures preventing Black actors from being cast in lead roles; their purpose is to sell certain shows and products. Another example is Pride Month advertising, when many brands switch to rainbow packaging. While it is meant to signal allyship, the main goal is to market the product to LGBTQIA+ audiences.
References:
Senft, T., & Noble, S. U. (2015). Race and social media. In J. Hunsinger & T. Senft (Eds.), The Social Media Handbook (pp. 102-125). Routledge.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.
Blog Post #4! Due 2/27/25
Are we ever “raceless” or separate from our lived identities on the internet/in cyberspace? (Question based on Kolko, Nakamura, and Rodman’s “Race in Cyberspace – An Introduction”)
Despite the thought that the internet provides a certain “anonymity” regarding our real-life identity, we aren’t truly this “cyborg” ideal separate from our lived identities outside the online realm. Kolko discusses how our experiences online are shaped by how we experience race in the real world: “we can’t help but bring our knowledge, experiences, and values with us when we log on” (pg. 5). Even if you are completely anonymous and never express any part of your identity, your lived experiences will impact how you interact online. The nature of the internet allows people to express their thoughts and opinions freely, but something as simple as a word choice can communicate a part of your identity. Kolko additionally discusses Jacobs-Huey’s concept of language communicating “identity and positionality” (pg. 6), which provides another example of how the internet is not without race. How you talk online, through the words you use or the topics you discuss, can hint at your identity outside the computer screen. Going further with this concept, I would say that real-life identity is never absent online when interacting with other users. Even when it is never discussed, users often make assumptions about one another based on how they type. I frequently see this in queer communities, where certain typing styles or quirks are associated with those who identify under that umbrella.
How does the interface of online spaces and/or video games limit levels of participation for those who are queer or non-white? (Question based on ibid. & Jeffrey A. Ow’s “The Revenge of the Yellow-Faced Cyborg Terminator”)
Although many social media sites have since grown to have mostly accessible interfaces, issues of access to representation persist in many video games. Social media/communication apps such as Instagram or Discord allow you to input your custom pronouns for other users to see, creating an interface allowing those of diverse gender identities to represent themselves. However, most video games still use the default pick-and-choose of male (he/him pronouns) or female (she/her pronouns). Many have shifted to making character creation a choice between two body types, but this often results in NPCs (non-playable characters) within the game referring to your character with one set of pronouns. A game that comes to mind is Cyberpunk 2077, which presents a choice of body type and voice type. Despite this choice, the character is still ultimately referred to with either set of pronouns, even if you create a “nonbinary” character whose voice does not match their appearance. This limitation makes it difficult for those who identify outside of the gender binary to a) create a character that looks like them or b) be referred to by pronouns that are not considered “default.” The issue of character customization applies to non-white gamers as well, with an example being many games not having an adequate/substantial number of textured hairstyles compared to a wide range of straight/wavy hairstyles. Both of these examples send a message that these kinds of games were designed mainly with one audience in mind—the gender-binary conforming white gamer.
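To make this interface difference concrete, here is a minimal Python sketch of the two designs described above. It is purely illustrative: the class names, fields, and dialogue line are invented for this post and are not taken from Cyberpunk 2077, Discord, or Instagram’s actual code. It simply shows how a binary body-type field bakes a he/she choice into every downstream line of NPC dialogue, while a free-form pronoun field lets users describe themselves on their own terms.

```python
from dataclasses import dataclass
from enum import Enum


class BodyType(Enum):
    """A binary character-creator field, as in many games."""
    MASCULINE = "masculine"
    FEMININE = "feminine"


@dataclass
class GameCharacter:
    """Hypothetical character record for a game with a binary creator."""
    name: str
    body_type: BodyType

    def npc_greeting(self) -> str:
        # Dialogue is keyed off the binary field, so a nonbinary player
        # still gets funneled into "he" or "she".
        pronoun = "he" if self.body_type is BodyType.MASCULINE else "she"
        return f"Tell {self.name} that {pronoun} is needed at the gate."


@dataclass
class SocialProfile:
    """Hypothetical profile with a free-form pronoun field, like custom pronouns on social apps."""
    name: str
    pronouns: str  # e.g. "they/them", "she/they", "xe/xem"

    def display_line(self) -> str:
        return f"{self.name} ({self.pronouns})"


if __name__ == "__main__":
    # The game interface offers only two paths, whatever the player's identity is.
    player = GameCharacter(name="V", body_type=BodyType.FEMININE)
    print(player.npc_greeting())   # "...that she is needed at the gate."

    # The profile interface lets the user state pronouns on their own terms.
    profile = SocialProfile(name="V", pronouns="they/them")
    print(profile.display_line())  # "V (they/them)"
```

Under this assumed design, supporting players outside the binary would not be a huge technical lift, which is part of why the binary default reads more like a choice about the imagined audience than a hard limitation.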
In Ow’s article, he discusses how the game’s creators were proud of its ability to ignite conflict and attention. How is this strategy still used today in modern video gaming? (Question based on Jeffrey A. Ow’s “The Revenge of the Yellow-Faced Cyborg Terminator”)
Controversy is still commonly used as a promotional tactic for many things, including video games. A pattern I’ve noticed is that video games add features so shocking and unnecessary that they spark conversation online, putting more eyes on the game. Looking at the example of Cyberpunk 2077 once more, part of the pre-release coverage highlighted a curious feature available in character customization: the ability to customize the male playable character’s genitalia. Even though I wasn’t interested in the game until many years later, I still heard the discussion and controversy around this feature and how ridiculous people found it. Ridiculous or not, it certainly put attention on the game. Controversy has drawing power, which is why it is utilized in the media, even when the controversy is the presence of prejudiced depictions, as in the game Ow discusses, Shadow Warrior. In that case, controversy draws out an audience with similar beliefs and ideologies, which was probably the creator’s intended audience in the first place.
Do video games continue the idea that technology is “race-less” and unbiased?
I think video games are not subject to the same assumption of being “unbiased.” In some ways, video games can be truly raceless. Ow brings up the example of Pac-Man as a third-person perspective game (pg. 57): Pac-Man avoids including race in its depiction even though you can see the character you play as. With first-person games, however, you don’t see the character. As Ow discusses, this allows players to fill themselves into whatever role the character presents (pg. 57). I think video games can be raceless in first-person if the character you play as never speaks and is never visible. Even so, the experience of race and identity can still shape the way the player controls the character. In the case of Shadow Warrior, the first-person perspective does not create a freely crafted identity but instead reflects a stereotyped depiction, a negative example of how race can be reflected in video games. As with algorithms and the internet, video games are built from the real experiences of those creating them and, by extension, those playing them. If anything, video games are even more shaped by race and identity, because every part of them is determined by the developers, including the choice of what the player can control.
References:
Kolko, B. E., Nakamura, L., & Rodman, G. B. (2000). Race in Cyberspace: An Introduction. In B. E. Kolko, L. Nakamura, & G. B. Rodman (Eds.), Race in Cyberspace (pp. 1-13). Routledge.
Ow, J.A. (2000). The Revenge of the Yellowfaced Cyborg Terminator. In B. E. Kolko, L. Nakamura, & G. B. Rodman (Eds.), Race in Cyberspace (pp. 51-68). Routledge.
Blog Post #3! Due 2/13/25
Why do we refer to data collection and profiling as “progressive” despite its tendency to be biased against Black Americans and other POC? (Question based on Benjamin’s “Race After Technology” and Noble’s “Algorithms of Oppression”)
Technology, as Benjamin (2019) puts it, has a perceived “cloak of objectivity.” The tools, algorithms, and forms of data collection used in our reality are regarded as objective since they cannot “see” race. Technology as a whole, in my opinion, is often seen as universal and for everyone. Many people see phones, search engines, and other “tech” as existing without human involvement. Most people, including me, don’t think beyond what technology does for us; it just does, and we don’t question how it was made, who made it, and what biases might be embedded in the “cool, advanced, and innovative” new AIs, virtual assistants, and search engines.
California gang database: why does law enforcement keep inaccurate databases that are difficult to change and easy to add onto? (Question based on Benjamin’s “Race After Technology” and the YJC report)
Gang databases, like the one featured in the report, seem to have a larger purpose of surveilling POC, namely Latino communities. The databases’ flawed design is effective at upholding inaccurate narratives of Latino and Black involvement in gang activity. The recording methods, which make it easy for these populations to be put in the database, inflate the apparent number of people with gang involvement. While it is a shot in the dark, I also think the “ease” provided by the broadness of the gang database is seen as beneficial to the prison industry. I’m not entirely sure how it works, but I would assume that the number of people available to arrest or detain from the database boosts private prisons. The YJC report states that these databases are “widely used without evaluating their cost effectiveness or effectiveness in increasing community safety.” The database’s purpose is less about community safety and more about control over narratives and individuals.
How does the digital divide myth that POC, particularly Black Americans, are less interested in the internet still linger today? How is this idea of the digital divide reflected in social media? (Question based on Everett’s “The Revolution Will Be Digitized”)
In her work, Everett (2002) discusses how the sphere of “cyberspace” is often associated with whiteness, with white users seen as the dominant demographic for technology and the internet. While this work was written about earlier internet use, I still see this pattern on social media, specifically in the artist community on sites such as Twitter, Instagram, and Tumblr. Something I’ve noticed is how often people are “surprised” to learn that a particular artist or creator is Black if they’ve never disclosed it, whether by being faceless or simply by not stating it. I would say this idea of the digital divide persists because of our perceptions of internet and social media use. In my example of Black creators, the “surprise” most likely comes from the assumption that they would be white, an assumption enabled by the “facelessness” associated with the internet.
In “Algorithms of Oppression,” Noble writes on racist Google Search results and Google’s position that it is “not responsible for its algorithm.” Then who is “responsible”? Can any one person/entity be held responsible for flawed algorithms? (Question based on Noble’s “Algorithms of Oppression”)
While I don’t think any single entity can be held responsible for such flawed algorithms, that only points to a more significant issue in their structure. Noble (2018) writes that “racism and sexism are part of the architecture and language of technology”; algorithms and technology are built on flawed human ideas and prejudices, and then reflect those prejudices back. This makes it important to question these systems rather than rely on them completely. Responsibility, in this case, comes with doubting the system and double-checking whether the information it presents is accurate and unbiased. Responsibility for changing the search results still falls on Google, because they trusted the algorithm to be unbiased when it was not.
Works Cited:
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.
Everett, A. (2002). The Revolution Will Be Digitized: Afrocentricity and the Digital Public Sphere. Social Text, 20(2).
Youth Justice Coalition (2012). Tracked and trapped: Youth of color, gang databases, and gang injunctions. https://youthjusticela.org/wp-content/uploads/2012/12/TrackedandTrapped.pdf
Blog Post #2! Due 2/06/25
What issues could the concept of “identity tourism” and online anonymity bring to feminist efforts online? (Question based on J. Daniels’ “Rethinking Cyberfeminism(s)”)
The lack of “visibility” of one’s identity online can be seen as positive or negative. When thinking of issues, the first thing that comes to mind is the potential for someone to use their anonymity to pretend to be someone they are not. While this is not always done for malicious purposes, in the case of feminist movements it can be done to much detriment. There is a risk of someone posing as leadership and spreading misinformation, or acting in a way that casts a negative light on the organization or individual being targeted by this behavior. Additionally, online anonymity makes it incredibly easy for groups to be infiltrated and investigated. I would worry about this happening, for example, if a feminist organization were using the internet as a pathway to work around legal or social limitations in real life. Anonymity could provide yet another outlet for bad actors to exploit feminist movements.
Why do we continue to rely on automated technology and algorithms in important areas such as law or healthcare despite their continued propensity for error? (Question based on Eubanks’ “Automating Inequality” and Nicole Brown’s “Race and Technology”)
As Eubanks (2018) discusses in her piece, this automated technology is efficient despite its many issues, and to the broader public it seems both efficient and effective. I believe people don’t often question these automated systems, believing them to be sophisticated and unlikely to err. The broader public may also not question them because they are used by “official” institutions. When these systems do produce errors biased against POC, those errors sadly tend to fall in line with the same prejudices and thought processes already at work; they reinforce deeply ingrained narratives about POC and about those who need social services. This only takes people further away from questioning the algorithms at all.
Additionally, why hasn’t much effort been made to create better alternatives for these systems or to replace them? (Question based on Eubanks’ “Automating Inequality” and Nicole Brown)
On top of falling in line with prejudices, another thing that keeps these types of algorithms and systems around is the financial aspect. For example, Nicole Brown mentions hiring apps that use facial recognition to determine the best candidates. According to Brown (2020), these hiring apps are trained on data sets composed mainly of white and male faces. A solution would be to build a more diverse data set, or to code a new algorithm that isn’t purely based on photos, right? The problem with these solutions is that they would cost more money; if the system in place already “works” for those it benefits, why would they want to spend money fixing it? These algorithms streamline processes for large businesses and save them money and time, at the cost of inaccurate profiling and unequal opportunities.
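As a rough illustration of the mechanism Brown describes, here is a small, entirely synthetic Python sketch. It does not model any real hiring product, dataset, or demographic data; every number and group label is made up. It just fits a trivial “template” model to training data that is 90% one group and then scores equally qualified candidates from both groups.

```python
import random
import statistics

random.seed(42)

def sample_feature(group: str) -> float:
    # Each group's synthetic "appearance feature" is drawn from a different
    # distribution; neither distribution says anything about job ability.
    center = 0.0 if group == "A" else 3.0
    return random.gauss(center, 1.0)

# Training set: 90% group A, 10% group B, mirroring a skewed photo dataset.
train = [("A", sample_feature("A")) for _ in range(900)] + \
        [("B", sample_feature("B")) for _ in range(100)]

# The "model" is just the average feature of the training set; candidates are
# scored by how closely they resemble that learned template.
template = statistics.mean(value for _, value in train)

def score(candidate_feature: float) -> float:
    return -abs(candidate_feature - template)  # closer to the template = higher score

# Evaluate equally qualified candidates from both groups.
scores_a = [score(sample_feature("A")) for _ in range(1000)]
scores_b = [score(sample_feature("B")) for _ in range(1000)]

print(f"learned template:    {template:.2f}")
print(f"mean score, group A: {statistics.mean(scores_a):.2f}")
print(f"mean score, group B: {statistics.mean(scores_b):.2f}")
# Group B scores lower only because it was under-represented in training,
# not because its candidates are any less qualified.
```

Because group B scores lower purely by construction of the training set, the only real fix is collecting more representative data and retraining, which is exactly the cost companies have little incentive to pay when the current system already “works” for them.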
How do online spaces reproduce the social structures and biases of the real world? (Question based on Daniels)
In online spaces, the predominance and amplification of white voices is reflected across different platforms. According to Daniels, online spaces function on the expectation of whiteness. Much like in real life, the overwhelming whiteness of online spaces makes it difficult for those outside the “dominant” culture being represented to feel comfortable breaking the mold (Daniels, 2009).
Works Cited:
Daniels, J. (2009). Rethinking cyberfeminism(s): Race, gender, and embodiment. Women’s Studies Quarterly.
Brown, N. (2020). Race and technology [Video]. YouTube.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Blog Post #1
My story begins a few weeks before finals week of last semester, Fall 2024. For Christmas, I had gotten myself a brand new PlayStation 5 after wanting a new console for years. My previous console, a PS3, had quite literally burned itself out from being left on for too long in my childhood. I loved that console, and many cherished memories were made with it before it decided it was done working. It’s safe to say I already had some problems with PlayStation consoles and had reason to be wary about getting another. Despite this, I was ready to give PlayStation another chance to prove itself to me.

I got my brand new, shiny PS5 early in the month. There was a problem, though: when it finally arrived at my house, the box was open! I had the perfectly reasonable reaction of freaking out and worrying that the console had been tampered with or was missing something. Everything turned out alright, but this was only the first stress this console would give me. I had been waiting until I was finally done with my final papers to unbox and set up the console; it was like a reward. After a long few weeks of suffering through writing and revisions, I took the time to set it up in my room. I was so excited to try it out and finally play all the games I had dreamed about in my downtime!

But then the issues started. For whatever reason, the audio would begin to cut out at random. The way PS5 audio works, you plug your headphones into the controller, so both the controller and your audio rely on a Bluetooth connection. This has caused me much anguish. The fix for the audio cutting out is to simply reset the controller using the tiny button on the back of it. Sometimes, however, that method causes the controller to stop working entirely, and then I have to reset the whole console. It’s incredibly annoying! You’d think that after spending so much money on something, it would work correctly without being a pain in the neck. Still, I love my console, and I do not regret getting it, even if that audio issue gets to me.