When the System Reads My Skin

One boundary that has collapsed significantly in the past five years is the line between health data privacy and social identity surveillance, particularly in how biometric and algorithmic health systems categorize bodies in ways that disproportionately affect Black women. In recent years, technology has increasingly transformed people’s bodies and personal health information into data that systems use to make life-changing decisions. This shift especially impacts Black women because these technologies are often biased and misread their bodies, reinforcing the sense that the boundary between private identity and public control is no longer firmly maintained.

Algorithmic Bias in Healthcare

Over the past five years, algorithms used to predict health risks, such as hospital admission likelihood or treatment prioritization, have demonstrated clear racial bias. For example, a clinical algorithm widely used by hospitals to determine which patients required additional care was found to favor white patients over Black patients: Black patients had to be significantly sicker than white patients to receive the same care recommendations. This occurred because the algorithm was trained on historical healthcare spending data, which reflected long-standing inequalities in access to care and in financial investment in Black patients’ health (Grant, 2025). Additionally, many sensors in point-of-care testing devices and wearable technologies perform less accurately on darker skin tones, which can negatively affect diagnosis, monitoring, and treatment outcomes.
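
To make that mechanism concrete, here is a toy simulation in Python. It is my own illustration, not the audited hospital algorithm or its data; the group labels, effect sizes, and cutoff are invented. When “risk” is proxied by historical spending, members of a group that spends less at equal levels of illness must be sicker to clear the same referral threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
illness = rng.normal(size=n)            # true health need (unobserved by the system)
group = rng.integers(0, 2, size=n)      # 1 = historically underserved group

# At equal need, the underserved group generates less healthcare spending.
spending = illness - 0.8 * group + rng.normal(scale=0.5, size=n)

# A model trained to predict cost will reproduce cost, so use spending
# itself as the "risk score" a cost-based algorithm would assign.
risk_score = spending
cutoff = np.quantile(risk_score, 0.90)  # top 10% referred for extra care

for g in (0, 1):
    referred = (group == g) & (risk_score >= cutoff)
    print(f"group {g}: mean illness of referred patients = {illness[referred].mean():.2f}")
# Group 1's referred patients come out measurably sicker on average: they
# had to be sicker than group 0's to clear the same spending-based cutoff.
```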

This shift is largely driven by technological and economic forces. Artificial intelligence and machine learning systems are often trained on datasets that do not adequately represent minority populations, allowing these gaps and biases to persist. As biometric and health technologies move into everyday use, the consequences of these inaccuracies become more widespread and impactful. Companies are frequently pressured to deploy products quickly for competitive and financial gain, often without conducting inclusive testing. This economic incentive accelerates the erosion of the boundary between private health information and public, system-driven classification.

Power, Profit, and Control

Cyberpunk literature frequently explores the collapse of boundaries through dystopian systems that reduce individuals to data profiles and identity categories. Similarly, modern health and biometric technologies increasingly invade personal privacy and autonomy by translating people into datasets that determine how they are treated within medical, social, and institutional systems. Black women, who often experience overlapping racial, gender, and technological biases, face a compounded burden. Their bodies and identities are more likely to be misclassified in ways that affect not only health outcomes, but also interactions with broader systems such as employment and public surveillance. This reinforces a cycle in which the boundary between the self and external systems of control continues to dissolve.

The primary beneficiaries of this shift are technology companies and healthcare payers, who profit financially and reduce costs by relying on automated systems rather than human labor and individualized care. Those most impacted are communities with less power to challenge or question data-driven decisions. Entities that design and control these algorithms occupy a particularly powerful position, as they define what counts as “normal” data and shape who profits from these systems. This raises critical ethical and political questions, including what rights individuals should have over their personal health and identity data, and how society can ensure that technology does not replicate or reinforce historical patterns of oppression.

In conclusion, the collapse of the boundary between health data privacy and identity surveillance reflects key cyberpunk themes, especially when viewed through the lived experiences of Black women. This shift highlights the urgent need for accountability, equitable technological design, and policy interventions that rebalance these boundaries and ensure that technological progress serves all communities fairly.

Citations

Grant, C. (2025, September 24). Algorithms are making decisions about health care, which may only worsen medical racism. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism

Sharfstein, J. (2023, May 2). How health care algorithms and AI can help and harm. Johns Hopkins Bloomberg School of Public Health. https://publichealth.jhu.edu/2023/how-health-care-algorithms-and-ai-can-help-and-harm

Targeted News Service. (2024, October 17). Association of Health Care Journalists: Biased devices – Reporting on racial bias in health algorithms and products. https://advance.lexis.com/api/document?collection=news&id=urn%3acontentItem%3a6D6R-94T1-DYG2-R3S2-00000-00&context=1519360&identityprofileid=NZ9N7751352

Identity 2.0: When Your Face Becomes Your Passport, Wallet, and Citizenship

In a cyberpunk world, identity isn’t just who we are—it’s what corporations and governments can verify, commodify, and control. Today, the boundary between physical identity and digital identity is eroding. What once was a legal document in a wallet is now a constellation of biometric scans, mobile IDs, and digital wallets that follow us everywhere we go. This isn’t tomorrow’s speculation—it’s happening now.

The Boundary That Has Shifted

Historically, identity was rooted in the physical: passports, birth certificates, social security cards. In the digital age, identity became credentials we entered online—usernames, passwords, PINs. But in 2025, digital identity systems are increasingly biometric, mobile, and machine-readable, blurring the line between who you are and what a machine recognizes you as.

Governments and corporations are building systems that link your face, fingerprint, voice, or palm directly to essential services like travel, banking, healthcare, and even public benefits. The European Union’s eIDAS 2.0 initiative is creating a digital identity wallet usable across all member states, promising convenience but also redefining what it means to prove who you are in a digital society.

Meanwhile, biometric techniques—once exotic—now fuel everyday authentication. From palm biometrics in stores and hospitals to mobile IDs on a phone, the move toward identity tied to our bodies rather than passwords is accelerating.

What’s Driving the Shift

Technological forces: Biometric systems and mobile identity standards have improved dramatically. Industry reports show passwordless authentication increasingly replacing traditional login methods, with biometrics offering convenience and security advantages—at least superficially (see the sketch after this list).

Economic incentives: Tech companies and governments alike see huge value in digital identity platforms. They reduce fraud, streamline services, and open doors to new monetizable data streams. No database is just for ID anymore—it’s also a goldmine for behavior, spending patterns, and social metrics.

Political and social pressures: The push for digital identity isn’t just consumer convenience. Governments argue it enhances security, prevents fraud, and enables digital citizenship in an era of global mobility. But critics warn that once biometric identity systems become ubiquitous, opting out becomes increasingly difficult.
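
As a rough sketch of what passwordless login means mechanically (my own illustration using Python’s cryptography package; real passkey standards such as FIDO2/WebAuthn add considerably more machinery), the server stores only a public key and verifies a signed challenge, while a biometric typically just unlocks the private key locally on the device:

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the device creates a key pair; only the public key is sent to
# the server. A face or fingerprint scan typically just unlocks this private
# key locally -- the biometric itself never leaves the device.
device_key = Ed25519PrivateKey.generate()
enrolled_public_key = device_key.public_key()

# Login: the server issues a random challenge, the device signs it after a
# local biometric check (not modeled here), and the server verifies.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    enrolled_public_key.verify(signature, challenge)
    print("authenticated: no shared secret was ever stored or transmitted")
except InvalidSignature:
    print("rejected")
```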

How This Connects to Cyberpunk

Cyberpunk fiction vividly illustrates worlds where identity is mutable, encoded, and monitored by systems beyond individual control. In Neuromancer or Snow Crash, identity chips, corporate databases, and neural codes make every person traceable and manipulable. Today’s digital identity systems reflect that logic: your face, your palm, your biometric signature becomes a node in a global network, shaped by technical architectures and power structures.

Cyberpunk theory teaches us to see how technologies don’t merely serve users but also reshape social relations. The transition to biometric, mobile IDs recasts identity itself as something processable, shareable, and surveilled—no longer purely personal, but infrastructural.

Who Benefits—and Who’s at Risk?

Potential benefits:

  1. Faster border crossings and secure travel documentation.
  2. Passwordless security that reduces traditional cyber-attacks.
  3. Access to services for people without traditional documentation.

Risks and harms:

  1. Surveillance and privacy erosion: Biometric systems can track movements across spaces, linking online and offline behaviors in ways never before possible.

  2. Exclusion and inequality: Individuals without compatible devices or digital literacy risk being shut out of essential systems.

  3. Permanent identifiers: Unlike passwords, biometric traits cannot be changed. If compromised, your faceprint or fingerprint is compromised for life.

These concerns echo fundamental cyberpunk anxieties about surveillance, agency, and control. When identity becomes a data point indexed and algorithmically processed, the human subject transforms into a profile—a mathematical object to be scored, categorized, and predicted.

Ethical Questions We Must Ask

Consent or coercion? When a digital ID is required for basic services, can consent truly be voluntary?

Who controls your identity? Is it a corporate cloud, a nation-state database, or the individual themselves?

What happens when borders are digital rather than physical? There’s a powerful allure to seamless global identity—but also a danger of borderless surveillance.

Understanding the collapse between physical and digital identity is urgent because it affects every person with a smartphone, a passport, or an online presence. The question isn’t whether identity is changing—but whether we will shape that change or be shaped by it.

APA-Style References

European Commission. (2025). eIDAS 2.0 digital identity wallet framework. TRUSTECH. https://www.trustech-event.com/en/event/news/digital-identity-trends-2025

Akhison, G. (2025). Towards a universal digital identity: A blockchain-based framework for borderless verification. Frontiers in Blockchain. https://www.frontiersin.org/journals/blockchain/articles/10.3389/fbloc.2025.1688287/full

Demystify Biometrics. (2025). Biometrics & digital identity: Top 5 trends. https://www.demystifybiometrics.com/post/march-2025-biometrics-digital-identity-top-5-trends

Digital identity in 2025: biometric wallets and privacy dilemmas. (2025). RTechnology. https://rtechnology.in/articles/1050/digital-identity-in-2025-biometric-wallets-and-privacy-dilemmas

Le Monde. (2025, September 1). The discreet rise of facial recognition around the world. https://www.lemonde.fr/en/pixels/article/2025/09/01/the-discreet-rise-of-facial-recognition-around-the-world_6744911_13.html

Strathmore University CIPIT. (2024). Global biometric and digital identity trend analysis (Global Report). https://cipit.strathmore.edu/wp-content/uploads/2024/05/Global-BDI-Trend-Analysis-Geographical-Assessment-Final-Approval-06.09.2023-compressed.

OpenAI. (2026). Digital identity and biometrics in everyday life [AI-generated image]. https://www.openai.com/dall-e

Personal Privacy in the Digital Age

One of the defining features of cyberpunk fiction is the breakdown of boundaries between humans and machines, nations and corporations, and especially between public and private life. What once felt like a dystopian exaggeration is increasingly becoming reality. Over the past five years, the boundary between personal privacy and corporate/governmental surveillance has shifted dramatically. The line separating what belongs to the individual and what can be collected, analyzed, and sold has grown thinner than ever before. A clear contemporary example of collapsing privacy boundaries is emerging in Edmonton, where police have launched a pilot program using body cameras equipped with AI to recognize faces from a “high-risk” watch list in real time. What was once seen as intrusive or ethically untenable—the use of facial recognition on wearable devices—has now moved into operational testing in a major Canadian city, prompting debate from privacy advocates and experts about the societal implications of such pervasive surveillance.

Expanding Data Collection

Today’s apps and platforms gather far more than basic profile information. Social media companies track users’ locations, browsing habits, interactions with AI tools, and even behavioral patterns across different websites. For example, updates to privacy policies from major platforms like TikTok and Meta now allow broader data harvesting, often as a condition for continued use. Many users unknowingly exchange massive amounts of personal information simply to stay connected.

The Rise of Biometric Surveillance

Facial recognition technology has moved from science fiction into everyday life. Law enforcement agencies increasingly use AI-powered systems to scan crowds, identify individuals, and track movements in real time. While these tools are promoted as improving public safety, they blur the boundary between public presence and constant monitoring. People can now be identified and recorded without their knowledge or consent.

Uneven Legal Protections

Some governments have attempted to respond with new privacy laws, such as the European Union’s AI regulations and stricter data protection frameworks in countries like India. These laws aim to limit how companies collect and use personal information. However, regulations remain fragmented and often struggle to keep pace with rapidly advancing technologies. This leaves significant gaps where corporations can continue exploiting personal data.

What’s Driving This Shift?

Technology

Advances in AI and big data analytics make it incredibly easy to process enormous amounts of personal information. Facial recognition, predictive algorithms, and personalized advertising rely on constant surveillance to function.

Economics

Personal data is now one of the most valuable resources in the digital economy. Companies profit from targeted advertising, AI training, and personalized services built entirely on user information. Privacy has effectively become a currency.

Who Benefits—and Who Pays the Price?

Beneficiaries

  • Tech corporations that profit from user data

  • Governments that gain expanded surveillance capabilities

Those Impacted

  • Everyday individuals losing control over personal information
  • Marginalized communities disproportionately targeted by surveillance technologies
  • People wrongfully identified by biased AI systems

References

Associated Press. (2024). AI-powered police body cameras, once taboo, get tested on Canadian city’s “watch list” of faces. AP News. https://apnews.com/article/21f319ce806a0023f855eb69d928d31e

Blog Post #1: Eyes Everywhere; AI Surveillance

Ever wonder who watches surveillance cameras beyond federal agents, police, and security personnel? Artificial intelligence has quietly become incredibly advanced, capable of tracking personal information and recognizing faces with astonishing accuracy. But where does AI store this information, and who has access to it?

Before the rise of AI, surveillance systems relied on continuous 24/7 recording that had to be monitored by human operators, who ensured that footage was not distorted, corrupted, or lost to limited storage space. According to the Security Industry Association, AI can now monitor and analyze network traffic in real time, strengthening network security and identifying suspicious activities such as unauthorized access attempts or unusual data transfers. When such activity is detected, users can take immediate action to block or contain potential threats.
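
A minimal sketch, with made-up numbers, of the baseline-and-flag logic this kind of monitoring builds on (real systems use far richer models): score each host’s current outbound volume against its own history and alert on large deviations.

```python
import statistics

# Hypothetical per-host baselines of outbound traffic (MB per hour).
history = {"host-a": [120, 130, 110, 125, 118],
           "host-b": [40, 38, 45, 42, 41]}
current = {"host-a": 127, "host-b": 900}   # host-b suddenly sending far more

for host, observed in current.items():
    mean = statistics.mean(history[host])
    stdev = statistics.stdev(history[host])
    z = (observed - mean) / stdev          # deviation from this host's own baseline
    if abs(z) > 3:                         # flag anything beyond 3 standard deviations
        print(f"ALERT {host}: {observed} MB/h (z = {z:.0f}) -- possible unusual transfer")
```

On this toy data only host-b trips the alert, which is the moment a human or automated response would step in to block or contain the transfer.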

While many argue that AI improves security, it also introduces significant challenges. One major concern is security breaches, as AI systems themselves can become targets for cyberattacks. Another is compliance: avoiding legal consequences requires adherence to national and international regulations governing the use of AI. Addressing these concerns may require collaboration among AI developers, cybersecurity professionals, and regulatory experts. AI holds the promise of a more holistic approach to security; however, many people place trust in AI without fully understanding where their data is stored or how it is used.

This shift reflects a cyberpunk-like reality of high technology and low transparency, where advanced systems coexist with humans in everyday life. Surveillance cameras are now embedded in our devices, networks, and infrastructure, allowing AI to operate with minimal human oversight.

Facial recognition has advanced significantly over the decades and has blended seamlessly into daily life. According to Critical Tech Solutions, AI facial recognition combines imaging, pattern recognition, and neural networks to analyze and compare facial data. This process typically involves three steps: capturing facial data, converting faces into digital templates, and matching and verification.
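
A minimal sketch of those three steps, with a hypothetical embed() function standing in for the imaging and neural-network stages (a real system would use a trained model, not this placeholder):

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding neural network."""
    vec = image.flatten()[:128].astype(float)    # placeholder, not a real model
    return vec / np.linalg.norm(vec)

# Steps 1-2: capture a face and convert it into a stored digital template.
enrolled_template = embed(np.random.rand(64, 64))

# Step 3: match a new capture against the template and verify.
def verify(template: np.ndarray, new_image: np.ndarray, threshold: float = 0.8) -> bool:
    probe = embed(new_image)
    similarity = float(template @ probe)         # cosine similarity of unit vectors
    return similarity >= threshold

print(verify(enrolled_template, np.random.rand(64, 64)))
```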

As we progress in today’s world, AI will continue to grow smarter, stronger, and more human-like. It is ultimately our responsibility to establish boundaries to ensure that AI does not override human authority or become a tool for harm.

Sources

Dorn, M. (2025a, November 18). Understanding AI facial recognition and its role in public safety. Tech Deployments Made Simple by Critical Tech Solutions. https://www.criticalts.com/articles/ai-facial-recognition-how-it-works-for-security-safety/

Dorn, M. (2025b, December 30). How AI surveillance transforms modern security. Tech Deployments Made Simple by Critical Tech Solutions. https://www.criticalts.com/articles/how-ai-surveillance-transforms-modern-security/

Galaz, V., et al. (2021). Artificial intelligence, systemic risks, and sustainability. Technology in Society. https://www.sciencedirect.com/science/article/am/pii/S0160791X21002165

Segil, J. (2024, March 19). How AI can transform integrated security. Security Industry Association. https://www.securityindustry.org/2024/03/19/how-ai-can-transform-integrated-security/

https://chatgpt.com/share/697574ec-b270-8003-8613-1bbb06691394

ChatGPT was used to create an AI-generated image and to revise my original thoughts into clearer, more organized writing.

A Letter to My CIA Agent

Dear Sir, Madam, or Algorithm,

I assume you are reading this. Not because I have done anything remarkable, but because in a world shaped by digital systems, observation has become routine rather than something exceptional.

Five years ago, I still thought of privacy as something I possessed, imperfectly, maybe, but meaningfully. I assumed that my movements, conversations, and online habits were largely my own unless I chose to share them. That assumption has quietly worn away. Not through a single policy change or technological breakthrough, but through countless small decisions like agreeing to terms of service, enabling location access, and storing personal information in the cloud.

There was no clear moment when the boundary disappeared. It simply stopped being visible.

What has shifted most in recent years is not the existence of surveillance, but its structure. Governments increasingly rely on private companies to collect and organize personal data and then access it through legal requests or market transactions. According to reporting by Proton, authorities worldwide, particularly in the United States of course, have dramatically increased their requests for user data from major technology firms, often with limited transparency and oversight (Koch, 2025). In this arrangement, corporate data collection and state surveillance are no longer meaningfully separate.

This shift reflects a broader normalization of data as a form of currency. Individuals exchange personal information for convenience, connectivity, and access to digital services. Companies monetize that data. Governments acquire it. Each step is justified as efficient, legal, or necessary. However, when taken together, they blur the line between consent and compliance.

The American Civil Liberties Union has documented how U.S. agencies such as the Department of Homeland Security have purchased location data from brokers rather than obtaining warrants, effectively bypassing constitutional safeguards (Venkatesh & Yu, 2026). While proponents argue this practice operates within existing legal frameworks, it raises important questions about whether privacy protections remain meaningful when personal data is treated as a commodity.

Similar patterns appear beyond the United States. In Jordan, authorities reportedly used phone-extraction tools to access activists’ devices, targeting political dissent through technological means (Kirchgaessner, 2026). These cases highlight how surveillance technologies are easily transferred across borders and contexts, and how they often impact those already vulnerable to state power.

Even technical protections such as encryption, which are framed as firm barriers to access, now prove to be conditional. In early 2026, Microsoft confirmed that it provided encryption keys to U.S. authorities when legally compelled to do so, prompting concern among privacy advocates about precedent and potential misuse (O’Brien, 2026). Security, it seems, depends less on technological limits than on institutional trust.

To be clear, surveillance systems are frequently defended on grounds of public safety, efficiency, and national security. These concerns deserve serious consideration. Yet the collective effect of extensive data collection and expanded access warrants equally serious scrutiny. Who benefits from this visibility? Who bears the risks? And how should societies balance collective security with individual autonomy?

I do not offer simple answers. What I do offer is a sense that we have crossed a boundary without fully acknowledging it. Privacy is now continuously redefined and negotiated, in ways that are often invisible to the people most affected. It is well on its way to vanishing entirely.

Thank you for your time and attention.

Warm regards,

One of your many data points


References

Kirchgaessner, S. (2026, January 22). Jordan used Israeli firm’s phone-cracking tool to surveil pro-Gaza activists, report finds. The Guardian. https://www.theguardian.com/world/2026/jan/22/jordan-israeli-spyware-gaza-activists

Koch, R. (2025, February 27). Authorities worldwide can see more than ever, with Big Tech as their eyes. Proton. https://proton.me/blog/big-tech-data-requests-surge

O’Brien, T. (2026, January 24). Microsoft handed the government encryption keys for customer data. The Verge. https://www.theverge.com/news/867244/microsoft-bitlocker-privacy-fbi

Venkatesh, A., & Yu, L. (2026, January 12). DHS is circumventing Constitution by buying data it would normally need a warrant to access. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/dhs-is-circumventing-constitution-by-buying-data-it-would-normally-need-a-warrant-to-access