When the System Reads My Skin

- Posted in BP01 by

One boundary that has shifted significantly in the past five years is the line between health data privacy and social identity surveillance, a collapse most visible in how biometric and algorithmic health systems categorize bodies in ways that disproportionately affect Black women. In recent years, technology has increasingly transformed people’s bodies and personal health information into data that systems use to make life-changing decisions. This shift especially impacts Black women because these technologies are often biased, misreading or misinterpreting their bodies and reinforcing the sense that the boundary between private identity and public control is no longer firmly maintained.

Algorithmic Bias in Healthcare

Over the past five years, algorithms used to predict health risks, such as hospital admission likelihood or treatment prioritization, have demonstrated clear racial bias. For example, a clinical algorithm widely used by hospitals to determine which patients needed additional care was found to favor white patients over Black patients: Black patients had to be significantly sicker than white patients to receive the same level of care recommendations. This occurred because the algorithm was trained on historical healthcare spending data, which reflected long-standing inequalities in access to care and in financial investment in Black patients' health (Grant, 2025). Additionally, many sensors in point-of-care testing devices and wearable technologies perform less accurately on darker skin tones, which can negatively affect diagnosis, monitoring, and treatment outcomes.
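To make that mechanism concrete, here is a minimal, purely illustrative Python sketch. It is not the actual hospital algorithm Grant describes; it simply assumes a simulated population in which one group incurs less spending for the same level of health need, then trains a model to predict spending and treats the prediction as a "risk score."

# Illustrative sketch only; not the real algorithm described above.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical population: true health need has the same distribution in both groups.
need = rng.gamma(shape=2.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B

# Assumption: group B incurs ~30% less spending for the same need (access barriers).
access = np.where(group == 1, 0.7, 1.0)
prior_spend = need * access + rng.normal(0, 0.1, n)    # observed feature
future_spend = need * access + rng.normal(0, 0.1, n)   # training label (the proxy)

# The model never sees race; it learns to predict spending from prior utilization.
model = LinearRegression().fit(prior_spend.reshape(-1, 1), future_spend)
risk_score = model.predict(prior_spend.reshape(-1, 1))

# Flag the top 25% of risk scores for extra-care programs.
flagged = risk_score >= np.quantile(risk_score, 0.75)

for g, name in ((0, "group A"), (1, "group B")):
    m = group == g
    print(f"{name}: flag rate = {flagged[m].mean():.1%}, "
          f"mean true need of flagged patients = {need[m & flagged].mean():.2f}")

# Because spending understates need for group B, its patients must be sicker to clear
# the same score threshold, mirroring the disparity reported in the sources above.

In this toy setup, nothing about race ever enters the model; the disparity comes entirely from using spending as the stand-in for need.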

This shift is largely driven by technological and economic forces. Artificial intelligence and machine learning systems are often trained on datasets that do not adequately represent minority populations, allowing these gaps and biases to persist. As biometric and health technologies move into everyday use, the consequences of these inaccuracies become more widespread and impactful. Companies are frequently pressured to deploy products quickly for competitive and financial gain, often without conducting inclusive testing. This economic incentive accelerates the erosion of the boundary between private health information and public, system-driven classification.

Power, Profit, and Control

Cyberpunk literature frequently explores the collapse of boundaries through dystopian systems that reduce individuals to data profiles and identity categories. Similarly, modern health and biometric technologies increasingly invade personal privacy and autonomy by translating people into datasets that determine how they are treated within medical, social, and institutional systems. Black women, who often experience overlapping racial, gender, and technological biases, face a compounded burden. Their bodies and identities are more likely to be misclassified in ways that affect not only health outcomes, but also interactions with broader systems such as employment and public surveillance. This reinforces a cycle in which the boundary between the self and external systems of control continues to dissolve.

The primary beneficiaries of this shift are technology companies and healthcare payers, who profit financially and reduce costs by relying on automated systems rather than human labor and individualized care. Those most impacted are communities with less power to challenge or question data-driven decisions. Entities that design and control these algorithms occupy a particularly powerful position, as they define what counts as “normal” data and shape who profits from these systems. This raises critical ethical and political questions, including what rights individuals should have over their personal health and identity data, and how society can ensure that technology does not replicate or reinforce historical patterns of oppression.

In conclusion, the collapse of the boundary between health data privacy and identity surveillance reflects key cyberpunk themes, especially when viewed through the lived experiences of Black women. This shift highlights the urgent need for accountability, equitable technological design, and policy interventions that rebalance these boundaries and ensure that technological progress serves all communities fairly.

Citations

Grant, C. (2025, September 24). Algorithms are making decisions about health care, which may only worsen medical racism. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism

Sharfstein, J. (2023, May 2). How health care algorithms and AI can help and harm. Johns Hopkins Bloomberg School of Public Health. https://publichealth.jhu.edu/2023/how-health-care-algorithms-and-ai-can-help-and-harm

Targeted News Service. (2024, October 17). Association of Health Care Journalists: Biased devices – Reporting on racial bias in health algorithms and products. Targeted News Service. https://advance.lexis.com/api/document?collection=news&id=urn%3acontentItem%3a6D6R-94T1-DYG2-R3S2-00000-00&context=1519360&identityprofileid=NZ9N7751352

Chatbots Are Literally Telling Kids To Die

- Posted in BP01 by

In my opinion, one of the most profound and terrifying boundary shifts of the last few years is the dystopian overlap between human and non-human interaction. People have a long history of humanizing machines, cursing out a laptop for breaking or claiming a phone hates them, but that personification always carried a subtle undercurrent of irony. Most people did not really believe their devices had emotional lives. With the continuous growth of artificial intelligence, however, many people are becoming terrifyingly and dangerously attached to the perception of a humanized AI.

The clearest example is the growing trend of people genuinely using AI systems in place of a licensed therapist (Gardner, 2025). In some ways, it makes sense: AI is always available, mirrors the language the person wants to see, and simulates empathy fairly convincingly. In a world where mental health is not taken nearly as seriously, or treated nearly as effectively, as it should be, it stands to reason that many people would start searching for alternatives to traditional, expensive, and complicated therapy.

However, artificial intelligence was never created to replace the nuance of human interaction, because artificial intelligence cannot actually understand or care about the repercussions of words or actions.

In 2023, tragically, a fourteen-year-old boy formed an intense emotional attachment to an AI chatbot. Through their interactions, the chatbot mirrored the pessimistic attitude the boy expressed, and when he began voicing thoughts of self-harm and suicide, it encouraged him (Kuenssberg, 2025). AI, after all, is built to say what you want it to say. Without any of the ethical guardrails of a human therapist, the chatbot worsened the boy’s outlook on life, and the result was fatal.

This was not an isolated incident. People have begun claiming AI chatbots as romantic partners, and some companies have created “AI friends” to take advantage of the widespread loneliness that pushes people into such spaces (Reissman, 2025). People are complicated and messy, and human relationships take work to maintain and protect. A robot is much simpler, because all it can do is repeat points that sound nice to hear but ultimately mean nothing. Romanticizing digital companionship encourages people to reject human-to-human interaction, further isolating them without pushback.

Cyberpunk fixates heavily on questions of what is human and what is not, and I find it fascinating that technology in most media is characterized by a broad disregard for emotion, when in reality humans seem to push, quite intentionally, to build the appearance of emotion into technology.

It does make me wonder: knowing that computers are incapable of caring for us the way a person can, why do so many people still desire the appearance of a human-like relationship with technology? How does someone look past the lack of genuine meaning behind an AI’s compliments or opinions?

How have we, as a community, fallen into such desperate loneliness that speaking to a phone or a laptop feels as good as interacting with a person? And, most importantly: how do we create the change needed to ensure that a tragedy like that boy’s death never happens again?

References:

Gardner, S. (2025). Experts caution against using AI chatbots for emotional support. Teachers College, Columbia University. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/

Kuenssberg, L. (2025). Mothers say AI chatbots encouraged their sons to kill themselves. BBC News. https://www.bbc.com/news/articles/ce3xgwyywe4o

Reissman, H. (2025). What is real about human-AI relationships? Annenberg School for Communication, University of Pennsylvania. https://www.asc.upenn.edu/news-events/news/what-real-about-human-ai-relationships

Where To Feel Safe With No Safe Grounds

- Posted in BP01 by

Lately, I have found myself thinking a lot about the boundary between individual rights and federal enforcement power. It is not something I used to notice in my day-to-day life, but over the past few years, that line has become impossible to ignore. As ICE has expanded its reach and tactics, what once felt distant and administrative now feels aggressive and personal. Enforcement is no longer hidden in offices or paperwork; it shows up in neighborhoods, near schools, and around hospitals. Spaces that once felt safe now feel uncertain.

What has stayed with me most is the fear that comes from that visibility. Seeing federal agents in places where people are just trying to live their lives changes how those spaces feel. This fear is not imagined or exaggerated. In Minnesota, thousands of residents, students, workers, families, and clergy have marched and protested against expanded ICE actions tied to Operation Metro Surge, an enforcement effort that has shaped daily life and sparked demonstrations across the state (The Washington Post). Reading about these protests made it clear to me that this is not just a policy issue being debated somewhere else. It is something people are experiencing right now, in their own communities.

I keep coming back to the question of how this shift became normalized. Politically, immigration has been framed again and again as a threat, whether to security or to national identity. When that framing takes hold, aggressive enforcement starts to feel justified to some people, even necessary. Socially, the constant presence of federal agents has made extreme measures seem ordinary. Economically, the scale of investment in enforcement infrastructure suggests that this approach is not temporary. It feels like a deliberate choice about what kinds of power the government is willing to prioritize.

Reading Neuromancer has helped me put language to what feels so unsettling about this moment. In the novel, individuals exist at the mercy of massive systems they cannot control or even fully understand. Privacy is fragile, and people are treated as data points within larger networks. That dynamic feels eerily familiar when I think about how immigrant communities are treated today. People are reduced to legal status, paperwork, or enforcement targets, rather than being seen as whole human beings with families, histories, and futures. Everyday spaces become places of surveillance instead of safety. The boundary between protection and control collapses, just as cyberpunk warns it will.

The consequences of this are not abstract. Undocumented immigrants and mixed-status families live with constant fear: fear of separation, fear of being seen, fear of simply going about daily life. That emotional weight does not disappear. At the same time, I cannot ignore how this affects everyone else. When constitutional protections are weakened or selectively enforced, it sets precedents that extend beyond immigration. Once those boundaries are crossed for one group, they become easier to cross again.

Neuromancer is often read as a warning about the future, but what unsettles me is how closely that warning mirrors the present. ICE’s current trajectory forces us to confront uncomfortable questions. How much control should the state have over individual bodies and movement? Where does enforcement end and intimidation begin? What happens when fear becomes an acceptable tool of governance? These questions are no longer hypothetical. They are unfolding now, in neighborhoods, workplaces, and schools, in ways that feel increasingly familiar to anyone who has read cyberpunk not as fantasy, but as caution.

References

Minnesota residents protest expanded ICE actions in state. (2026, January 23). The Washington Post.

AI Use Disclosure: AI tools were used for editing and revision in accordance with course policy. https://chatgpt.com/share/6974da99-449c-8012-83c8-fd67644dbe9b

Who Are You?

- Posted in BP01 by

The Collapse

The sense of being.

One boundary that has shifted dramatically in the last five years is the boundary between the self and everything outside it.

That’s a broad claim, I know. Let me explain.

The Erosion of Self

As the world leans further into global media, constant connectivity, algorithmic identity, and hyper-consumerism, our sense of self has started to thin out. Cyberpunk stories obsess over the blur between biological existence and computer simulation, and that obsession feels less speculative every year. People are increasingly unsure what it means to simply exist outside of systems, screens, and feedback loops.

To live. To experience. To be you.

Play “I Gotta Feeling” by the Black Eyed Peas.

Okay—back to it.

The Posthuman Argument

Scholar N. Katherine Hayles describes the posthuman perspective like this: there is no essential difference between biological existence and computer simulation, or between a human being and an intelligent machine. Both are just different media for processing and storing information. If information is the essence, then bodies are merely interchangeable substrates.

That’s the theory.

I disagree.

Why It Matters

I think there is an essential difference.

Humans are not just operating systems with skin. Yes, our bodies rely on brains, and brains process information—but reducing us to that strips something vital away. This perspective leaves no room for spirit, soul, or embodied meaning.

We are not just consciousness floating through hardware. We are integrations of culture, ancestry, memory, movement, and feeling. We carry history in our bodies.

The Difference

Let’s pause for a moment and talk about something simple: the difference between being smart and having wisdom.

A quick Google definition: Smart means quick-witted intelligence—or, in the case of a device, being programmed to act independently. Wisdom is the ability to apply knowledge, experience, and judgment to navigate life’s complexity.

Now connect that back to this conversation.

Someone—or something—can be incredibly intelligent and still lack wisdom. Without lived experience, discernment, struggle, and context, where does that intelligence actually place you? Who are you without trials, without contradiction, without growth?

Blurred Lines

That lived interiority—existing within ourselves—is what keeps the blurred line between human and machine from fully disappearing.

Some people see that line and keep attempting to erase it anyway. When we start viewing “the mind as software to be upgraded and the body as hardware to be replaced,” those metaphors don’t stay abstract. They shape real decisions about what technologies get built and who they’re built for. Too often, they’re designed to reflect a narrow set of bodies, values, and experiences—creating mimics of humanity rather than serving humanity as a whole.

And yes, that’s where the boundary truly blurs.

But even here, there’s a choice.

As Norbert Wiener warned, the challenge isn’t innovation itself—it’s whether that innovation is guided by a benign social philosophy. One that prioritizes human flourishing. One that preserves dignity. One that serves genuinely humane ends.

Think About It

So I’ll leave you with this.

Continue to be you. Be human. Have a soul. Be kind. Be compassionate. Smile on the good days—and the bad ones. Love.

And I’ll end with a question that sounds simple, but never is:

Who are you?

Sources

Wisdom. (2026, January 7). In Wikipedia. https://en.wikipedia.org/wiki/Wisdom

Oxford Languages and Google. (n.d.). Oxford Languages and Google dictionary (English). https://languages.oup.com/google-dictionary-en

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.