When the System Reads My Skin

One boundary that has shifted significantly in the past five years is the one separating health data privacy from social identity surveillance, particularly in how biometric and algorithmic health systems categorize bodies in ways that disproportionately affect Black women. In recent years, technology has increasingly transformed people’s bodies and personal health information into data that systems use to make life-changing decisions. This shift especially impacts Black women because these technologies frequently misread or misclassify their bodies, reinforcing the sense that the boundary between private identity and public control is no longer firmly maintained.

Algorithmic Bias in Healthcare

Over the past five years, algorithms used to predict health risks, such as hospital admission likelihood or treatment prioritization, have demonstrated clear racial bias. For example, a clinical algorithm widely used by hospitals to identify patients who needed additional care was found to favor white patients over Black patients: Black patients had to be significantly sicker than white patients to receive the same care recommendations. This occurred because the algorithm used historical healthcare spending as a proxy for medical need; since long-standing inequities in access meant less money had been spent on Black patients’ care, the algorithm scored them as healthier than they actually were (Grant, 2025). Additionally, many sensors in point-of-care testing devices and wearable technologies perform less accurately on darker skin tones. Pulse oximeters, for instance, rely on light passing through the skin and can overestimate blood oxygen levels in patients with darker skin, which can negatively affect diagnosis, monitoring, and treatment outcomes.
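To make the proxy-label mechanism concrete, the following is a minimal sketch using synthetic data. It is not the actual hospital algorithm; the group labels, the size of the access gap, and every other number here are illustrative assumptions rather than figures from the cited sources.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: 'need' is the true underlying illness burden.
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B (hypothetical)
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Historical spending tracks need, but group B receives less care for the
# same need (a hypothetical access gap), so observed cost understates need.
access_gap = np.where(group == 1, 0.6, 1.0)
cost = need * access_gap + rng.normal(0, 0.1, n)

# A risk score based on cost ranks patients for an extra-care program:
# the top 10% by predicted spending are flagged.
threshold = np.quantile(cost, 0.9)
flagged = cost >= threshold

# Compare true need among flagged patients by group: group B members must
# be sicker than group A members to clear the same cost threshold.
for g, name in [(0, "group A"), (1, "group B")]:
    mean_need = need[(group == g) & flagged].mean()
    print(f"{name}: mean true need when flagged = {mean_need:.2f}")
```

Running the sketch shows flagged members of the under-served group carrying a higher true illness burden than flagged members of the other group, mirroring the dynamic described above: equal cost scores do not mean equal need.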

This shift is driven largely by technological and economic forces. Artificial intelligence and machine learning systems are often trained on datasets that underrepresent minority populations, allowing gaps and biases to persist; because aggregate accuracy metrics can look strong even when performance on a small subgroup is poor, these failures often go unnoticed until after deployment. As biometric and health technologies move into everyday use, the consequences of these inaccuracies become more widespread and impactful. Companies are frequently pressured to ship products quickly for competitive and financial gain, often without inclusive testing, and this economic incentive accelerates the erosion of the boundary between private health information and public, system-driven classification.
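The auditing gap described above can be illustrated with a short, hypothetical sketch: disaggregating an error metric by subgroup instead of reporting a single aggregate number. The group sizes and error rates below are invented for illustration and do not come from any of the cited sources.

```python
import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    """Report a model's error rate separately for each subgroup, rather
    than one aggregate number that can hide subgroup disparities."""
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Hypothetical evaluation set: 9,000 samples from a majority group and
# 1,000 from a minority group the training data under-represented.
rng = np.random.default_rng(1)
groups = np.array(["majority"] * 9000 + ["minority"] * 1000)
y_true = rng.integers(0, 2, 10_000)

# Assume the model errs 5% of the time on the majority group and 20% on
# the minority group (illustrative numbers only).
err = np.where(groups == "majority", 0.05, 0.20)
flip = rng.random(10_000) < err
y_pred = np.where(flip, 1 - y_true, y_true)

print("aggregate error:", round(float(np.mean(y_true != y_pred)), 3))
print("by group:", error_rate_by_group(y_true, y_pred, groups))
```

The aggregate error of roughly 6.5% looks acceptable while the minority-group error sits near 20%, which is exactly the kind of disparity that inclusive pre-deployment testing is meant to surface.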

Power, Profit, and Control

Cyberpunk literature frequently explores the collapse of boundaries through dystopian systems that reduce individuals to data profiles and identity categories. Similarly, modern health and biometric technologies increasingly invade personal privacy and autonomy by translating people into datasets that determine how they are treated within medical, social, and institutional systems. Black women, who often experience overlapping racial, gender, and technological biases, face a compounded burden. Their bodies and identities are more likely to be misclassified in ways that affect not only health outcomes, but also interactions with broader systems such as employment and public surveillance. This reinforces a cycle in which the boundary between the self and external systems of control continues to dissolve.

The primary beneficiaries of this shift are technology companies and healthcare payers, who profit financially and reduce costs by relying on automated systems rather than human labor and individualized care. Those most impacted are communities with less power to challenge or question data-driven decisions. Entities that design and control these algorithms occupy a particularly powerful position, as they define what counts as “normal” data and shape who profits from these systems. This raises critical ethical and political questions, including what rights individuals should have over their personal health and identity data, and how society can ensure that technology does not replicate or reinforce historical patterns of oppression.

In conclusion, the collapse of the boundary between health data privacy and identity surveillance reflects key cyberpunk themes, especially when viewed through the lived experiences of Black women. This shift highlights the urgent need for accountability, equitable technological design, and policy interventions that rebalance these boundaries and ensure that technological progress serves all communities fairly.

Citations

Grant, C. (2025, September 24). Algorithms are making decisions about health care, which may only worsen medical racism. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism

Sharfstein, J. (2023, May 2). How health care algorithms and AI can help and harm. Johns Hopkins Bloomberg School of Public Health. https://publichealth.jhu.edu/2023/how-health-care-algorithms-and-ai-can-help-and-harm

Targeted News Service. (2024, October 17). Association of Health Care Journalists: Biased devices – Reporting on racial bias in health algorithms and products. https://advance.lexis.com/api/document?collection=news&id=urn%3acontentItem%3a6D6R-94T1-DYG2-R3S2-00000-00&context=1519360&identityprofileid=NZ9N7751352