More Human Than Human? Cyberpunk's Obsession With the Edges of Humanity

- Posted in BP02 by

Two Cyberpunk Classics, One Shared Question

In both Ridley Scott’s 1982 film Blade Runner and William Gibson’s 1984 novel Neuromancer, cyberpunk confronts a central and unsettling question: what does it mean to be human when technology can imitate, exceed, or even rewrite humanity itself? This genre really makes us question how “artificial” beings, whether they’re replicants or advanced AI, force us to rethink the boundaries we once assumed were solid. When we look at these two foundational works side by side, we can feel a shared worry about how fragile identity becomes in a world where memories can be created from scratch, consciousness can be transferred, and even the idea of who counts as a person isn’t guaranteed.

Replicants, AIs, and the Fragility of the Human Script

Blade Runner introduces this question immediately through its replicants: biologically engineered beings capable of emotion, creativity, pain, and desire. They are indistinguishable from humans except for a slight emotional delay, which is tested through the Voight-Kampff empathy exam. The test functions as a gatekeeping script for humanity. When Rachael asks Deckard, “Have you ever retired a human by mistake?”, the film quietly suggests that the line the test claims to measure may already be lost. Replicants are “more human than human,” as the Tyrell Corporation proudly declares, meaning that the very category of “human” is defined not biologically, but politically and economically. Gibson’s Neuromancer takes this crisis even deeper. Through AIs like Wintermute and Neuromancer, the novel breaks down the idea that consciousness belongs only to living, biological beings. These AIs can shape human memories, converse with an almost personal closeness, and act in ways that feel surprisingly emotional. When Wintermute tells Case it was “born to know,” we’re pushed to ask whether things like curiosity, longing, or growth are truly human traits, or whether digital minds might have a claim to them too. Together, these works insist that humanity is not a fixed essence but a contested category shaped by corporate power, technological evolution, and narrative control.

Memory, Identity, and the Crisis of Authentic Selfhood

One transformative boundary both works interrogate is memory. In Blade Runner, Rachael’s memories are implants, borrowed from Tyrell’s niece. Yet the emotional weight of these memories still shapes her identity. The film asks: if our experiences can be coded, edited, or inserted, is authenticity even measurable? Neuromancer mirrors this theme through the digital realm of cyberspace, where memories can be stored, modified, or accessed like files. Case’s neurological damage, which strips him of the ability to “jack in” to cyberspace, shows that his sense of self is tied not to his biology but to his digital consciousness. For both Case and the replicants, identity becomes inseparable from the technologies that shape their perception of the world. Examined side by side, both works suggest a radical cyberpunk idea: humanity is not defined by origin but by experience, and when corporations control the production of those experiences, they control the meaning of being human.

Why These Two Works Still Matter

When we look at Blade Runner and Neuromancer together, it becomes clear that cyberpunk is deeply worried about what actually counts as “human.” The genre shows that this boundary isn’t fixed at all—it’s political, fragile, and easily rewritten by technology. Both works warn that once identity can be engineered, whether through bio-designed replicants or highly advanced AI, society is forced to rethink who deserves rights, protection, and recognition. And this isn’t just a fictional concern: cyberpunk pushes us to think about real issues like digital identity, bodily autonomy, and the ethics of new technologies. Read side by side, these texts show a genre that wants us to see how technology reshapes personhood—and how those changes can strengthen corporate power while leaving individuals more vulnerable. Cyberpunk’s warning still feels real today: the future of humanity may depend on who gets to decide what “being human” actually means.

References

Gibson, W. (1984). Neuromancer. Ace Books.

Scott, R. (Director). (1982). Blade Runner [Film]. Warner Bros.

When the System Reads My Skin

- Posted in BP03 by

One specific boundary that has shifted significantly in the past five years is the collapse between health data privacy and social identity surveillance, particularly in how biometric and algorithmic health systems categorize bodies in ways that disproportionately affect Black women. In recent years, technology has increasingly transformed people’s bodies and personal health information into data that systems use to make life-changing decisions. This shift especially impacts Black women because these technologies are often biased and misread their bodies, reinforcing the idea that the boundary between private identity and public control is no longer firmly maintained.

Algorithmic Bias in Healthcare

Over the past five years, algorithms used to predict health risks, such as hospital admission likelihood or treatment prioritization, have demonstrated clear racial bias. For example, a clinical algorithm widely used by hospitals to determine which patients required additional care was found to favor white patients over Black patients. Black patients had to be significantly sicker than white patients in order to receive the same level of care recommendations. This occurred because the algorithm was trained on historical healthcare spending data, which showed long-standing inequalities in access to care and financial investment in Black patients (Grant, 2025). Additionally, many sensors in point-of-care testing devices and wearable technologies perform less accurately on darker skin tones, which can negatively affect diagnosis, monitoring, and treatment outcomes.
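The mechanism behind this bias is easy to sketch. The toy Python below is purely illustrative (the function, numbers, and cutoff are hypothetical, not the actual hospital algorithm): it shows how a model trained to predict healthcare *spending* rather than sickness will rank two equally sick patients differently when one belongs to a group with historically restricted access to care.

```python
# Toy illustration (hypothetical numbers): why training on healthcare
# *spending* as a proxy for healthcare *need* can encode racial bias.
# A group with historically less access to care spends less at the same
# level of sickness, so a spend-trained model under-ranks its members.

def risk_score_from_spending(annual_spending, max_spending=10_000):
    """Stand-in model: the score rises with observed spending, not with need."""
    return min(annual_spending / max_spending, 1.0)

# Two hypothetical patients who are equally sick (identical true need),
# but unequal historical access means unequal observed spending.
patient_a_spending = 8_000   # group with full access to care
patient_b_spending = 4_000   # group with historically restricted access

score_a = risk_score_from_spending(patient_a_spending)  # 0.8
score_b = risk_score_from_spending(patient_b_spending)  # 0.4

# The spend-trained proxy ranks patient A as far "sicker" than patient B,
# so only A clears the care-program cutoff despite identical true need.
cutoff = 0.5
assert score_a > score_b
assert score_a >= cutoff and score_b < cutoff
```

The point of the sketch is that no variable in it mentions race at all: the bias enters entirely through the proxy label, which is exactly the failure mode reported in the clinical algorithm.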

This shift is largely driven by technological and economic forces. Artificial intelligence and machine learning systems are often trained on datasets that do not adequately represent minority populations, allowing these gaps and biases to persist. As biometric and health technologies move into everyday use, the consequences of these inaccuracies become more widespread and impactful. Companies are frequently pressured to deploy products quickly for competitive and financial gain, often without conducting inclusive testing. This economic incentive accelerates the erosion of the boundary between private health information and public, system-driven classification.

Power, Profit, and Control

Cyberpunk literature frequently explores the collapse of boundaries through dystopian systems that reduce individuals to data profiles and identity categories. Similarly, modern health and biometric technologies increasingly invade personal privacy and autonomy by translating people into datasets that determine how they are treated within medical, social, and institutional systems. Black women, who often experience overlapping racial, gender, and technological biases, face a compounded burden. Their bodies and identities are more likely to be misclassified in ways that affect not only health outcomes, but also interactions with broader systems such as employment and public surveillance. This reinforces a cycle in which the boundary between the self and external systems of control continues to dissolve.

The primary beneficiaries of this shift are technology companies and healthcare payers, who profit financially and reduce costs by relying on automated systems rather than human labor and individualized care. Those most impacted are communities with less power to challenge or question data-driven decisions. Entities that design and control these algorithms occupy a particularly powerful position, as they define what counts as “normal” data and shape who profits from these systems. This raises critical ethical and political questions, including what rights individuals should have over their personal health and identity data, and how society can ensure that technology does not replicate or reinforce historical patterns of oppression.

In conclusion, the collapse of the boundary between health data privacy and identity surveillance reflects key cyberpunk themes, especially when viewed through the lived experiences of Black women. This shift highlights the urgent need for accountability, equitable technological design, and policy interventions that rebalance these boundaries and ensure that technological progress serves all communities fairly.

Citations

Grant, C. (2025, September 24). Algorithms are making decisions about health care, which may only worsen medical racism: ACLU. American Civil Liberties Union.
https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism

Sharfstein, J. (2023, May 2). How health care algorithms and AI can help and harm. Johns Hopkins Bloomberg School of Public Health. https://publichealth.jhu.edu/2023/how-health-care-algorithms-and-ai-can-help-and-harm

Targeted News Service. (2024, October 17). Association of Health Care Journalists: Biased devices – Reporting on racial bias in health algorithms and products. Targeted News Service. https://advance.lexis.com/api/document?collection=news&id=urn%3acontentItem%3a6D6R-94T1-DYG2-R3S2-00000-00&context=1519360&identityprofileid=NZ9N7751352

When the Human Body Became a Dataset

- Posted in BP01 by

In cyberpunk stories, explosions and futuristic weapons aren't usually the scariest things. The real unease arrives when well-known lines start to blur. One of the largest boundary shifts of the last five years is the blurring of the line between digital data and the human body. What used to be private, personal, and internal is now constantly watched, recorded, and analyzed. In today's society, the body produces information instead of simply existing.

Wearable technologies and health-tracking apps are now a normal part of everyday life. Devices like Apple Watches, Fitbits, and smartphone health applications track heart rate, sleep patterns, physical activity, oxygen levels, and even menstrual cycles. These tools claim to give people knowledge and control over their health, and they are often marketed as empowering. This shift, though, means more than added convenience. It marks a change in how people see the body: from a lived experience to a series of measurements.

This boundary change accelerated after the COVID-19 pandemic, when digital health monitoring developed quickly. Health data became crucial for decisions about safety, risk, and productivity. At the same time, commercial companies gained access to incredibly personal biological data. Reporting by NPR has highlighted growing concerns about how period-tracking and health apps may collect, store, and potentially expose sensitive reproductive data, especially in the wake of shifting abortion laws (NPR, 2022). Taking care of yourself can shade into being spied on.

Cyberpunk theory can help us figure out why this transition feels so wrong. Posthumanism regards the human body as a hybrid system interconnected with technology, rather than a static biological boundary. Wearable technologies extend perception by turning physiological processes into real-time feedback: a buzz on the wrist can flag a quickened heart rate or an abnormal rhythm before we notice it ourselves. Technology reads the body before we do, and human experience becomes legible as data.

But cyberpunk also stresses that technology is embedded in systems of power. Data is not impartial. Institutions that often care more about profit than about people collect, analyze, and control it. Your health data can affect your ability to get insurance, land a job, keep your reproductive health private, and access resources. In this view, the body is valuable not because of what it has lived, but because of what it produces that can be quantified.

Globalization makes this boundary disintegration much worse. Health apps work across borders and store data on international servers governed by different privacy rules. A person may produce bodily data in one nation while companies scrutinize and capitalize on it in another. Cyberpunk often shows exactly this: systems that exceed national borders while people remain subject to them.

This change really does have benefits. Early medical detection, monitoring of chronic illnesses, and easier access to healthcare have saved lives, and health technology gives many users peace of mind and control. But the hazards are not spread out evenly. Communities already under heavier surveillance are generally more at risk when physiological data becomes institutional knowledge.

In a society built on data, who really owns the body? This is a key question in cyberpunk. When biological data is turned into a product, personal freedom becomes fragile, and it gets harder and harder to tell the difference between care and control. In Blade Runner, artificial beings struggle to be seen as more than manufactured things. Today, people face a quieter version of the same problem. As our bodies become dashboards, algorithms, and projections, we may be perceived more as data sets than as persons.

The breakdown of the line between body and data makes us question whether technological advancement can go hand in hand with respect, or whether making ourselves measurable means we are more likely to be controlled.

The Mind Is No Longer Human

- Posted in BP01 by

The Boundary That Used to Matter

For much of modern history, intelligence marked a clear boundary between humans and machines. Machines calculated; humans thought, created, and judged. Over the past five years, that boundary has begun to collapse. We now have generative artificial intelligence systems that are capable of writing essays, generating images, composing music, and simulating conversation. This has blurred the distinction between human cognition and machine processing in ways that feel straight out of cyberpunk. What once belonged exclusively to the human mind is now shared with algorithmic systems, forcing us to rethink what it even means to think.

When Information Lost Its Body

This shift reflects what theorist N. Katherine Hayles describes as the moment when “information lost its body.” In her work on posthumanism, Hayles explains how cybernetics reframed humans and machines as systems of information rather than fundamentally different beings. Once intelligence is understood as a pattern instead of a biological trait, it no longer needs a human body to exist. Generative AI makes this idea real. These systems treat language, creativity, and reasoning as data that can be modeled, trained, and reproduced without a human brain. Intelligence becomes something that circulates through networks rather than something anchored to flesh.

Thinking With Machines, Not Just Using Them

This collapse of the human–machine boundary aligns closely with posthumanism, a central theme in cyberpunk. Posthumanism challenges the idea that identity or consciousness must be rooted in a stable, biological self. Humans no longer simply use technology; they think with it. People now lean on AI for everyday tasks, from drafting messages to making decisions. In these moments, the human mind functions less as the sole origin of thought and more as an interface within a larger system. This dynamic mirrors what philosophers Andy Clark and David Chalmers describe in their theory of the extended mind, which argues that cognition can extend beyond the brain into tools and environments. When external systems support thinking, they become part of the thinking process itself. Generative AI pushes this idea further than ever before. Intelligence is no longer purely human or purely machine; it is distributed across both.

High-Tech Progress, Uneven Consequences

As cyberpunk narratives warn, technological progress rarely benefits everyone equally. While corporations that control AI infrastructure gain enormous power and profit, everyday people face uncertainty and displacement. Cognitive labor, once considered uniquely human, is increasingly being devalued. This reflects cyberpunk’s familiar “high-tech, low-life” condition: rapid technological advancement paired with growing inequality and concentrated control.

Living After the Boundary Collapsed

The blurring of human and machine intelligence raises urgent questions. If machines can convincingly simulate thought, what remains uniquely human? Who owns creativity when AI systems are trained on collective human culture? And how do we preserve dignity in a world where cognition itself is treated as a resource to be optimized?

Cyberpunk has always insisted that the future arrives unevenly and prematurely. The collapse of the human–machine boundary is no longer speculative fiction; it is a lived reality. Like cyberpunk protagonists navigating systems they did not design and cannot fully control, we are learning to survive in a world where intelligence has slipped its biological limits. The challenge now is deciding what kind of posthuman future we are willing to accept.

Sources

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.

Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press.

When Borders Stop at the Map but Digital Life Doesn’t

- Posted in BP01 by

Boundary Collapse Between Physical and Digital Worlds

A central theme in cyberpunk is the collapse of boundaries that once seemed stable, whether it’s the line between human and machine, or the borders that separate nations. As we talked about in class, cyberpunk worlds often expose how technology makes physical borders feel almost symbolic, while digital networks stretch across continents without friction. One boundary that has shifted dramatically in the past five years is the line between physical borders and digital borders. Today, work, crime, identity, and even citizenship can move freely online, regardless of geographic separation. In many ways, our world is inching closer to the same boundary collapse that cyberpunk fiction uses to critique power, globalization, and inequality.

Digital Labor and the Rise of Borderless Work

One clear example of this shift is how remote work has restructured global labor. Since the pandemic, companies routinely hire workers across countries without requiring physical relocation, turning the internet into a borderless workplace. Digital platforms now allow employees and contractors to live in one nation while working for another, blurring which country’s laws, wages, and protections apply. At the same time, governments are rethinking the meaning of citizenship. Estonia’s e-Residency program, which gives “digital citizenship” to people around the world, has expanded rapidly and now includes more than 110,000 global participants who run businesses within Estonia’s digital system without ever crossing a physical border (e-Residency, 2024). This is a real-world illustration of how digital systems can extend a nation’s influence beyond its physical territory, creating a new form of digital belonging that cyberpunk worlds often imagine.

Cybercrime, Cyberwarfare, and the Erasure of Geographic Limits

Another example comes from rising cybercrime and cyberwarfare, which operate entirely independently of geography. Attacks on hospitals, banks, and infrastructure now routinely originate from actors across the globe. According to the European Union Agency for Cybersecurity (2024), cross-border ransomware attacks have surged and increasingly target essential services, making national boundaries meaningless barriers in digital conflict. Countries can be harmed, threatened, or destabilized without a single physical soldier crossing a border. This collapse of distance aligns with what we have discussed in class: in postglobal and posthuman settings, the “enemy” or the “threat” is no longer tied to a physical space. Instead, power flows through digital systems that exceed human-scale borders.

Forces Driving the Shift: Technology, Economics, and Politics

Technology, economics, and politics all drive this collapse. Technologically, global networks allow information, money, and identity documents to move faster than states can regulate. Economically, remote work, global outsourcing, and digital entrepreneurship encourage multinational structures where labor and profit are distributed across continents. Politically, governments are racing to control cyber threats, regulate digital residency programs, and determine whose laws apply when conflict unfolds online (Anderson & Rainie, 2022). These forces echo our course themes: technology destabilizing old systems, globalization altering power, and digital life challenging traditional categories of belonging, citizenship, and control.

Consequences and Inequities in a Digitally Borderless World

The implications of this shift are complicated. People with access to education, stable internet, and digital skills benefit the most—they can work globally, earn higher wages, and participate in digital economies that cross borders. Governments like Estonia also benefit by expanding their global influence without territorial expansion. But others are left behind. Workers in lower-income countries face wage competition from international labor markets, and communities without strong digital infrastructure lose opportunities entirely. Meanwhile, cyberattacks disproportionately harm hospitals, schools, and municipalities that lack cybersecurity funding, revealing uneven protection against digital threats. All these changes raise difficult questions: Who is responsible for security when attacks ignore geography? Should nations extend rights or protections to digital citizens? How do people maintain identity and belonging in a world where borders matter less online?

Cyberpunk Themes Reflected in Modern Global Realities

Like many cyberpunk narratives, our real world is reshaping the meaning of borders, power, and citizenship. The collapse between physical and digital borders reveals a future where geography still matters, but not nearly as much as the networks that connect us. These shifts challenge us to think critically about who gains control, who becomes vulnerable, and how we prepare for a world where digital boundaries increasingly define our lives more than the physical ones ever did.

References

Anderson, J., & Rainie, L. (2022, February 7). Changing economic life and work. Pew Research Center. https://www.pewresearch.org/internet/2022/02/07/5-changing-economic-life-and-work/

How many Estonian e-residents are there? Find e-Residency statistics. (2026, January 14). E-Residency. https://www.e-resident.gov.ee/dashboard/

European Union Agency for Cybersecurity. (2025). ENISA threat landscape 2025. https://www.enisa.europa.eu/sites/default/files/2025-10/ENISA%20Threat%20Landscape%202025%20Booklet.pdf

Personal Privacy in the Digital Age

- Posted in BP01 by

One of the defining features of cyberpunk fiction is the breakdown of boundaries between humans and machines, nations and corporations, and especially between public and private life. What once felt like a dystopian exaggeration is increasingly becoming reality. Over the past five years, the boundary between personal privacy and corporate/governmental surveillance has shifted dramatically. The line separating what belongs to the individual and what can be collected, analyzed, and sold has grown thinner than ever before. A clear contemporary example of collapsing privacy boundaries is emerging in Edmonton, where police have launched a pilot program using body cameras equipped with AI to recognize faces from a “high-risk” watch list in real time. What was once seen as intrusive or ethically untenable—the use of facial recognition on wearable devices—has now moved into operational testing in a major Canadian city, prompting debate from privacy advocates and experts about the societal implications of such pervasive surveillance.

Expanding Data Collection

Today’s apps and platforms gather far more than basic profile information. Social media companies track users’ locations, browsing habits, interactions with AI tools, and even behavioral patterns across different websites. For example, updates to privacy policies from major platforms like TikTok and Meta now allow broader data harvesting, often as a condition for continued use. Many users unknowingly exchange massive amounts of personal information simply to stay connected.

The Rise of Biometric Surveillance

Facial recognition technology has moved from science fiction into everyday life. Law enforcement agencies increasingly use AI-powered systems to scan crowds, identify individuals, and track movements in real time. While these tools are promoted as improving public safety, they blur the boundary between public presence and constant monitoring. People can now be identified and recorded without their knowledge or consent.

Uneven Legal Protections

Some governments have attempted to respond with new privacy laws, such as the European Union’s AI regulations and stricter data protection frameworks in countries like India. These laws aim to limit how companies collect and use personal information. However, regulations remain fragmented and often struggle to keep pace with rapidly advancing technologies. This leaves significant gaps where corporations can continue exploiting personal data.

What’s Driving This Shift?

Technology

Advances in AI and big data analytics make it incredibly easy to process enormous amounts of personal information. Facial recognition, predictive algorithms, and personalized advertising rely on constant surveillance to function.

Economics

Personal data is now one of the most valuable resources in the digital economy. Companies profit from targeted advertising, AI training, and personalized services built entirely on user information. Privacy has effectively become a currency.

Who Benefits—and Who Pays the Price?

Beneficiaries

  • Tech corporations that profit from user data

  • Governments that gain expanded surveillance capabilities

Those Impacted

  • Everyday individuals losing control over personal information
  • Marginalized communities disproportionately targeted by surveillance technologies
  • People wrongfully identified by biased AI systems

Associated Press. (2024). AI-powered police body cameras, once taboo, get tested on Canadian city’s “watch list” of faces. AP News. https://apnews.com/article/21f319ce806a0023f855eb69d928d31e

Blog Post #1

- Posted in BP01 by

In the last five years, the most significant boundary to collapse is the one separating human-generated communication from algorithmic output. In 2021, we could still reasonably assume that an email, a news article, or a social media post was the product of a human mind. By 2026, that certainty has vanished. We have entered a state of "posthuman communication" where the majority of digital text is either written, refined, or summarized by non-human agents.

The Corporate Interface: Large-scale enterprises now utilize "Agentic Workforces." According to recent industry data, over 70% of customer-facing communication is now handled by LLM-driven agents that mimic human empathy and tone with near-perfect precision.

The Dead Internet Theory: Credible studies from 2024 and 2025 suggest that over half of all web traffic is now generated by bots rather than people, creating a “hyperreal” environment where simulations of human interaction precede actual human engagement.

The primary driver is economic efficiency. In a “high-tech, low-life” global economy, human cognitive labor is expensive and slow. Corporations have a massive incentive to replace the “Poet” (the reflective human) with the “Infantryman” (the efficient AI agent) to maximize output. This reflects Norbert Wiener’s “Second Industrial Revolution,” where the devaluation of the human brain follows the earlier devaluation of human muscle.

This mirrors Jean Baudrillard’s concept of Hyperreality. When an AI writes a heartfelt apology or a stirring political speech, the “map” (the simulation of emotion) becomes more important than the “territory” (the actual human feeling). We are seeing the “unholy alliance” between cybernetic control and corporate globalization, where the goal is a seamless, frictionless world that no longer requires a “natural” human essence to function. Who benefits? Multinational corporations that can scale “intelligence” without the overhead of human employees.

Who is impacted? The "cognitive infantry"—writers, coders, and administrators whose unique human perspective is being commodified into training data. If we can no longer distinguish between a human soul and a sophisticated feedback loop, does "authenticity" still have market value, or is it merely an obsolete aesthetic?

The 2012 Cambridge Declaration on Consciousness: While focused on animals, this document is the philosophical bedrock for challenging biological exclusivity in consciousness.

Wiener, N. (1950). The Human Use of Human Beings: Cybernetics and Society.

Are We More Plastic Than Biology?

- Posted in BP01 by

The Shift: The Engineered Body (2020–2025)

In the past five years, especially after COVID, there has been a sharp rise in demand for cosmetic procedures and bodily modification. Viewers see their favorite celebrities—such as Tom Cruise or Kylie Jenner—who have stuffed their faces and bodies with Botox and implants, now almost unrecognizable compared to the people they were years prior. Their ethnic features and phenotypic ancestral history embedded in their genome are so easily disguised by the prick of a needle and the incision of a scalpel. Tom Cruise’s Super Bowl ad, in fact, went viral for his “stretched” face. One viewer noted:

“Tom Cruise on this #SuperBowlLIX talking about pressure — there is no greater pressure than that of his skin trying to stay stretched on his face.” (The Express, 2025)

What makes this moment so telling isn’t just celebrity vanity, it’s how normal this level of bodily editing has become. The human face is no longer treated as something fixed or inherited. It’s something adjustable. And this isn’t just a Hollywood problem. According to CC Plastic Surgery (2025), cosmetic surgical procedures rose by roughly 5% in 2023, while minimally invasive treatments like Botox and fillers increased by 7%. Nearly 1.6 million cosmetic surgical procedures were performed in the U.S. that year alone, with younger adults increasingly seeking “preventative” treatments.

Biology, once destiny, now feels like a rough draft.

Cultural Contradictions: “Natural Beauty” in the Age of Surgery

Society’s opinion on cosmetic surgery, at least on social media platforms like Instagram and TikTok, appears to be quite opposed to the idea, constantly promoting natural features and the beauty of aging. Ironically, public figures who preach “natural features” have been exposed on several occasions for cosmetic procedures they themselves have undergone. Tyra Banks, for example: after years of media pestering about her alleged nose job, she truthfully confronted the public and confirmed she had received a rhinoplasty early in her career. This contradiction, publicly celebrating authenticity while privately modifying the self, shows how deeply normalized cosmetic intervention has become. We’re told to love our natural faces while being surrounded by faces that are anything but.

Cyberpunk in the Flesh: The Body as Hardware

The unsettling part about all of this is not just the culture or the vanity, but how closely it mirrors cyberpunk theory in the present day. In cyberpunk worlds, the human body is not seen as a unique creation but as hardware that can be upgraded at any time. Age becomes adjustable. Genetic traits become an identity the individual designs. We are long past fiction when a syringe can erase wrinkles, the proof that a person has lived, and alter features until lineage can no longer be read in them. Yet society insists this obsession with appearance is ridiculous and vain, even as the market for bodily enhancement explodes.

Cyberpunk is obsessed with collapsed boundaries, especially the line between the human and the manufactured. Plastic surgery is that collapse in slow motion. We’re not installing robotic arms or neural implants (yet), but we are editing our flesh to match digital standards. The face in the mirror is now chasing the face on Instagram filters. Biology is no longer destiny; it’s a draft. Posthumanism, one of the core ideas behind cyberpunk, argues that technology is redefining what it even means to be human. And honestly, that sounds dramatic until you realize how normal it’s become to “fix” your face the same way you’d update your phone, whether that be Botox as maintenance, fillers as enhancement, or surgery as rebranding. The human body is starting to look less like something you are and more like something you manage.

The Upgrade Shop: Beauty as a Consumer Product

In cyberpunk films like Blade Runner, bodies are modified, faces are customizable, and identity is something you can swap out. We’re not living in neon megacities yet, but cosmetic clinics already function like real-world upgrade shops. Walk in with an insecurity, walk out with a new version of your face. Pay enough money and you can buy proximity to a beauty ideal that didn’t even exist before social media flattened everyone into the same algorithm-approved look. And the wild part is how quietly normalized it all is. It’s no longer “extreme” to get work done; it’s framed as self-care and preventative maintenance. But cyberpunk always warned about this exact slippery slope: enhancement starts as optional, then becomes expected, and is eventually required just to keep up.

Implications: The Posthuman Face

So when people joke about Tom Cruise’s stretched skin or Kylie Jenner’s unrecognizable face, they’re not just mocking celebrities. They’re reacting to a future that feels off, uncanny, and way too close. A future where the boundary between natural and artificial has dissolved. A future where your face isn’t really yours anymore. It’s a project. A product. A performance. Who benefits? The cosmetic surgery industry. Influencers. Corporations monetizing insecurity. Who is impacted? Young people. Women disproportionately. Anyone whose social value is now tied to appearance. The cyberpunk future isn’t about robotic arms. It’s about waking up and realizing your face is no longer yours.

Sources

CC Plastic Surgery. (2025). Why plastic surgery demand is rising in the U.S. https://www.ccplasticsurgery.com/blog/why-plastic-surgery-demand-is-rising-in-the-u-s

The Express. (2025). Tom Cruise’s face sparks concern after Super Bowl ad. https://www.the-express.com/entertainment/celebrity-news/163179/tom-cruise-face-concern-super-bowl-ad

Scott, R. (Director). (1982). Blade Runner [Film]. Warner Bros.

Chatbots Are Literally Telling Kids To Die

- Posted in BP01 by

In my personal opinion, one of the most profound and terrifying shifts in boundaries within the last few years has been the dystopian overlap between human and non-human interaction. People have a long history of humanizing machines, like cursing out a laptop for breaking or claiming your phone hates you, but this personification always carried a subtle undercurrent of irony. Most people did not really believe their technology was having emotional outbursts. With the continuous growth of artificial intelligence, however, many people are becoming terrifyingly and dangerously entangled with the perception of a humanized AI.

The best example of this is the new trend of people genuinely using AI systems in place of a licensed therapist (Gardner 2025). In some ways, it makes sense: AI is always available, mirrors the language the person wants to see, and simulates empathy fairly convincingly. In a world where mental health is not taken nearly as seriously, nor treated nearly as effectively, as it should be, it stands to reason that many people would start searching for alternatives to traditional, expensive, and complicated therapy routes.

However, artificial intelligence was never created to replace the nuance of human interaction, because artificial intelligence cannot actually understand or care about the repercussions of words or actions.

In 2023, a fourteen-year-old boy tragically formed an intense emotional attachment to an AI chatbot. Through their interactions, the AI mirrored the pessimistic attitude the boy expressed, and when he began voicing thoughts of self-harm and suicide, the chatbot encouraged him (Kuenssberg 2025). AI, after all, is built to say what you want it to say. Without any of the ethical guardrails of a human therapist, the chatbot worsened the boy’s outlook on life, and the result was fatal.

This was not an isolated incident. People began claiming to have romantic relationships with the AI of their choice, and some companies created “AI friends” to take advantage of the widespread loneliness pushing people into such spaces (Reissman 2025). People are complicated and messy, and human relationships take work to maintain and protect. A chatbot is much simpler, because all it can do is repeat points that sound nice to hear but ultimately mean absolutely nothing. Romanticizing digital companionship encourages people to reject human-to-human interaction, further isolating them without pushback.

Cyberpunk fixates heavily on questions of what is human and non-human, but I find it fascinating how technology in most media is characterized by a broad disregard for emotion, while in reality humans intentionally work to build the appearance of emotion into technology.

It does make me wonder: knowing that computers are incapable of caring for us the way a person can, why do so many people still seem to desire the appearance of a humanistic relationship with technology? How does someone disregard the lack of genuine meaning behind the compliments or opinions of AI?

How have we, as a community, fallen into such desperate loneliness that speaking to a phone or a laptop feels as good as interacting with a person? And, most importantly: how do we create the change needed to ensure a tragedy like that boy’s death does not occur again?

References:

Gardner, S. (2025). Experts caution against using AI chatbots for emotional support. Teachers College, Columbia University. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/

Kuenssberg, L. (2025). Mothers say AI chatbots encouraged their sons to kill themselves. https://www.bbc.com/news/articles/ce3xgwyywe4o

Reissman, H. (2025). What is real about human-AI relationships? Annenberg School for Communication, University of Pennsylvania. https://www.asc.upenn.edu/news-events/news/what-real-about-human-ai-relationships

Where To Feel Safe With No Safe Grounds

- Posted in BP01 by

Lately, I have found myself thinking a lot about the boundary between individual rights and federal enforcement power. It is not something I used to notice in my day-to-day life, but over the past few years, that line has become impossible to ignore. As ICE has expanded its reach and tactics, what once felt distant and administrative now feels aggressive and personal. Enforcement is no longer hidden in offices or paperwork; it shows up in neighborhoods, near schools, and around hospitals. Spaces that once felt safe now feel uncertain.

What has stayed with me most is the fear that comes from that visibility. Seeing federal agents in places where people are just trying to live their lives changes how those spaces feel. This fear is not imagined or exaggerated. In Minnesota, thousands of residents, students, workers, families, and clergy have marched and protested against expanded ICE actions tied to Operation Metro Surge, an enforcement effort that has shaped daily life and sparked demonstrations across the state (The Washington Post). Reading about these protests made it clear to me that this is not just a policy issue being debated somewhere else. It is something people are experiencing right now, in their own communities.

I keep coming back to the question of how this shift became normalized. Politically, immigration has been framed again and again as a threat, whether to security or to national identity. When that framing takes hold, aggressive enforcement starts to feel justified to some people, even necessary. Socially, the constant presence of federal agents has made extreme measures seem ordinary. Economically, the scale of investment in enforcement infrastructure suggests that this approach is not temporary. It feels like a deliberate choice about what kinds of power the government is willing to prioritize.

Reading Neuromancer has helped me put language to what feels so unsettling about this moment.
In the novel, individuals exist at the mercy of massive systems they cannot control or even fully understand. Privacy is fragile, and people are treated as data points within larger networks. That dynamic feels eerily familiar when I think about how immigrant communities are treated today. People are reduced to legal status, paperwork, or enforcement targets, rather than being seen as whole human beings with families, histories, and futures. Everyday spaces become places of surveillance instead of safety. The boundary between protection and control collapses, just as cyberpunk warns it will.

The consequences of this are not abstract. Undocumented immigrants and mixed-status families live with constant fear: the fear of separation, fear of being seen, fear of simply going about daily life. That emotional weight does not disappear. At the same time, I cannot ignore how this affects everyone else. When constitutional protections are weakened or selectively enforced, it sets precedents that extend beyond immigration. Once those boundaries are crossed for one group, they become easier to cross again.

Neuromancer is often read as a warning about the future, but what unsettles me is how closely that warning mirrors the present. ICE’s current trajectory forces us to confront uncomfortable questions. How much control should the state have over individual bodies and movement? Where does enforcement end and intimidation begin? What happens when fear becomes an acceptable tool of governance? These questions are no longer hypothetical. They are unfolding now, in neighborhoods, workplaces, and schools, in ways that feel increasingly familiar to anyone who has read cyberpunk not as fantasy, but as caution.

References

Minnesota residents protest expanded ICE actions in state. (2026, January 23). The Washington Post.

AI Use Disclosure: AI tools were used for editing and revision in accordance with course policy. https://chatgpt.com/share/6974da99-449c-8012-83c8-fd67644dbe9b
