When the Human Body Became a Dataset

- Posted in BP01 by

In cyberpunk stories, explosions and futuristic weapons aren't usually the scariest things. The real unease arrives when familiar boundaries start to blur. One of the largest boundary shifts of the last five years is the blurring of the line between digital data and the human body. What used to be private, personal, and internal is now continuously watched, recorded, and analyzed. In today's society, the body doesn't simply exist; it produces information. Wearable technologies and health-tracking apps are now a normal part of everyday life. Devices like Apple Watches, Fitbits, and smartphone health applications track heart rate, sleep patterns, physical activity, oxygen levels, and even menstrual cycles. These tools claim to give people knowledge and control over their health, and they are often marketed as empowering. This shift, though, means more than added convenience. It marks a change in how people see the body: from a lived experience to a series of measurements.

This boundary change accelerated after the COVID-19 pandemic, when digital health monitoring developed rapidly. Health data became central to decisions about safety, risk, and productivity. At the same time, commercial companies gained access to intensely personal biological data. Reporting by NPR has highlighted growing concerns about how period-tracking and health apps may collect, store, and potentially expose sensitive reproductive data, especially in the wake of shifting abortion laws [NPR, 2022]. Self-care can shade into surveillance.

Cyberpunk theory helps explain why this transition feels so unsettling. Posthumanism regards the human body as a hybrid system interconnected with technology rather than a static biological boundary. Wearable technologies extend perception by turning physiological processes into real-time feedback: a vibration can signal a racing heart or an abnormal rhythm, so the technology reads the body before we do. Human experience becomes mediated through data. But cyberpunk also stresses that technology operates within systems of power. Data is not neutral. It is collected, analyzed, and controlled by institutions that often care more about profit than about people. Your health data can affect your ability to get insurance, find a job, keep your reproductive health private, and access resources. In this view, the body is valuable not for what it experiences but for what about it can be quantified.

Globalization makes this boundary collapse even more pronounced. Health apps operate across borders and store data on international servers governed by different privacy rules. A person may generate bodily data in one nation while companies scrutinize and monetize it in another. Cyberpunk often depicts exactly this: systems that exceed national borders while individuals remain subject to them.

This change does have real benefits. Early medical detection, monitoring of chronic illness, and easier access to healthcare have saved lives, and health technology gives many users peace of mind and a sense of control. But the hazards are not distributed evenly. Communities already under heavier surveillance are generally more at risk when physiological data becomes institutional knowledge.

Who really owns the body in a society built on data? This is a key cyberpunk question. When biological data is turned into a product, personal autonomy becomes fragile, and it gets harder and harder to tell the difference between care and control. In Blade Runner, artificial beings struggle to be seen as more than manufactured things. Today, people face a quieter version of the same problem. As our bodies become dashboards, algorithms, and projections, we risk being perceived more as datasets than as persons.

The breakdown of the line between body and data forces us to ask whether technological advancement can go hand in hand with respect, or whether making ourselves measurable makes us easier to control.

The Mind Is No Longer Human

- Posted in BP01 by

The Boundary That Used to Matter

For much of modern history, intelligence marked a clear boundary between humans and machines. Machines calculated; humans thought, created, and judged. Over the past five years, that boundary has begun to collapse. Generative artificial intelligence systems can now write essays, generate images, compose music, and simulate conversation, blurring the distinction between human cognition and machine processing in ways that feel distinctly cyberpunk. What once belonged exclusively to the human mind is now shared with algorithmic systems, forcing us to rethink what it even means to think.

When Information Lost Its Body

This shift reflects what theorist N. Katherine Hayles describes as the moment when “information lost its body.” In her work on posthumanism, Hayles explains how cybernetics reframed humans and machines as systems of information rather than fundamentally different beings. Once intelligence is understood as a pattern instead of a biological trait, it no longer needs a human body to exist. Generative AI makes this idea real. These systems treat language, creativity, and reasoning as data that can be modeled, trained, and reproduced without a human brain. Intelligence becomes something that circulates through networks rather than something anchored to flesh.

Thinking With Machines, Not Just Using Them

This collapse of the human–machine boundary aligns closely with posthumanism, a central theme in cyberpunk. Posthumanism challenges the idea that identity or consciousness must be rooted in a stable, biological self. Humans no longer simply use technology; they think with it. People now rely on AI for everyday cognitive tasks. In these moments, the human mind functions less as the sole origin of thought and more as an interface within a larger system. This dynamic mirrors what philosophers Andy Clark and David Chalmers describe in their theory of the extended mind, which argues that cognition can extend beyond the brain into tools and environments. When external systems support thinking, they become part of the thinking process itself. Generative AI pushes this idea further than ever before: intelligence is no longer purely human or purely machine but distributed across both.

High-Tech Progress, Uneven Consequences

As cyberpunk narratives warn, technological progress rarely benefits everyone equally. While the corporations that control AI infrastructure gain enormous power and profit, everyday people face uncertainty and displacement. Cognitive labor, once considered uniquely human, is increasingly devalued. This reflects cyberpunk's familiar "high-tech, low-life" condition: rapid technological advancement paired with growing inequality and concentrated control.

Living After the Boundary Collapsed

The blurring of human and machine intelligence raises urgent questions. If machines can convincingly simulate thought, what remains uniquely human? Who owns creativity when AI systems are trained on collective human culture? And how do we preserve dignity in a world where cognition itself is treated as a resource to be optimized?

Cyberpunk has always insisted that the future arrives unevenly and prematurely. The collapse of the human–machine boundary is no longer speculative fiction; it is a lived reality. Like cyberpunk protagonists navigating systems they did not design and cannot fully control, we are learning to survive in a world where intelligence has slipped its biological limits. The challenge now is deciding what kind of posthuman future we are willing to accept.

Sources

Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. University of Chicago Press. https://arl.human.cornell.edu/linked%20docs/Hayles-Posthuman-excerpts.pdf

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19. http://www.jstor.org/stable/3328150

When Borders Stop at the Map but Digital Life Doesn’t

- Posted in BP01 by

Boundary Collapse Between Physical and Digital Worlds

A central theme in cyberpunk is the collapse of boundaries that once seemed stable, whether it’s the line between human and machine, or the borders that separate nations. As we talked about in class, cyberpunk worlds often expose how technology makes physical borders feel almost symbolic, while digital networks stretch across continents without friction. One boundary that has shifted dramatically in the past five years is the line between physical borders and digital borders. Today, work, crime, identity, and even citizenship can move freely online, regardless of geographic separation. In many ways, our world is inching closer to the same boundary collapse that cyberpunk fiction uses to critique power, globalization, and inequality.

Digital Labor and the Rise of Borderless Work

One clear example of this shift is how remote work has restructured global labor. Since the pandemic, companies routinely hire workers across countries without requiring physical relocation, turning the internet into a borderless workplace. Digital platforms now allow employees and contractors to live in one nation while working for another, blurring which country’s laws, wages, and protections apply. At the same time, governments are rethinking the meaning of citizenship. Estonia’s e-Residency program, which gives “digital citizenship” to people around the world, has expanded rapidly and now includes more than 110,000 global participants who run businesses within Estonia’s digital system without ever crossing a physical border (e-Residency, 2024). This is a real-world illustration of how digital systems can extend a nation’s influence beyond its physical territory, creating a new form of digital belonging that cyberpunk worlds often imagine.

Cybercrime, Cyberwarfare, and the Erasure of Geographic Limits

Another example comes from rising cybercrime and cyberwarfare, which operate completely independent of geography. Attacks on hospitals, banks, and infrastructure now routinely originate from actors across the globe. According to the European Union Agency for Cybersecurity (2024), cross-border ransomware attacks have surged and increasingly target essential services, making national boundaries meaningless barriers in digital conflict. Countries can be harmed, threatened, or destabilized without a single physical soldier crossing a border. This collapse of distance aligns with what we have discussed in class: in postglobal and posthuman settings, the “enemy” or the “threat” is no longer tied to a physical space. Instead, power flows through digital systems that exceed human-scale borders.

Forces Driving the Shift: Technology, Economics, and Politics

Technology, economics, and politics all drive this collapse. Technologically, global networks allow information, money, and identity documents to move faster than states can regulate. Economically, remote work, global outsourcing, and digital entrepreneurship encourage multinational structures where labor and profit are distributed across continents. Politically, governments are racing to control cyber threats, regulate digital residency programs, and determine whose laws apply when conflict unfolds online (Anderson & Rainie, 2022). These forces echo our course themes: technology destabilizing old systems, globalization altering power, and digital life challenging traditional categories of belonging, citizenship, and control.

Consequences and Inequities in a Digitally Borderless World

The implications of this shift are complicated. People with access to education, stable internet, and digital skills benefit the most—they can work globally, earn higher wages, and participate in digital economies that cross borders. Governments like Estonia also benefit by expanding their global influence without territorial expansion. But others are left behind. Workers in lower-income countries face wage competition from international labor markets, and communities without strong digital infrastructure lose opportunities entirely. Meanwhile, cyberattacks disproportionately harm hospitals, schools, and municipalities that lack cybersecurity funding, revealing uneven protection against digital threats. All these changes raise difficult questions: Who is responsible for security when attacks ignore geography? Should nations extend rights or protections to digital citizens? How do people maintain identity and belonging in a world where borders matter less online?

Cyberpunk Themes Reflected in Modern Global Realities

Like many cyberpunk narratives, our real world is reshaping the meaning of borders, power, and citizenship. The collapse between physical and digital borders reveals a future where geography still matters, but not nearly as much as the networks that connect us. These shifts challenge us to think critically about who gains control, who becomes vulnerable, and how we prepare for a world where digital boundaries increasingly define our lives more than the physical ones ever did.

References

Anderson, J., & Rainie, L. (2022, February 7). Changing economic life and work. Pew Research Center. https://www.pewresearch.org/internet/2022/02/07/5-changing-economic-life-and-work/

How many Estonian e-residents are there? Find e-Residency statistics. (2026, January 14). E-Residency. https://www.e-resident.gov.ee/dashboard/

European Union Agency for Cybersecurity (ENISA). (2025). ENISA Threat Landscape 2025. https://www.enisa.europa.eu/sites/default/files/2025 10/ENISA%20Threat%20Landscape%202025%20Booklet.pdf

Personal Privacy in the Digital Age

- Posted in BP01 by



One of the defining features of cyberpunk fiction is the breakdown of boundaries between humans and machines, nations and corporations, and especially between public and private life. What once felt like a dystopian exaggeration is increasingly becoming reality. Over the past five years, the boundary between personal privacy and corporate/governmental surveillance has shifted dramatically. The line separating what belongs to the individual and what can be collected, analyzed, and sold has grown thinner than ever before. A clear contemporary example of collapsing privacy boundaries is emerging in Edmonton, where police have launched a pilot program using body cameras equipped with AI to recognize faces from a “high-risk” watch list in real time. What was once seen as intrusive or ethically untenable—the use of facial recognition on wearable devices—has now moved into operational testing in a major Canadian city, prompting debate from privacy advocates and experts about the societal implications of such pervasive surveillance.

Expanding Data Collection

Today’s apps and platforms gather far more than basic profile information. Social media companies track users’ locations, browsing habits, interactions with AI tools, and even behavioral patterns across different websites. For example, updates to privacy policies from major platforms like TikTok and Meta now allow broader data harvesting, often as a condition for continued use. Many users unknowingly exchange massive amounts of personal information simply to stay connected.

The Rise of Biometric Surveillance

Facial recognition technology has moved from science fiction into everyday life. Law enforcement agencies increasingly use AI-powered systems to scan crowds, identify individuals, and track movements in real time. While these tools are promoted as improving public safety, they blur the boundary between public presence and constant monitoring. People can now be identified and recorded without their knowledge or consent.

Uneven Legal Protections

Some governments have attempted to respond with new privacy laws, such as the European Union's AI regulations and stricter data protection frameworks in countries like India. These laws aim to limit how companies collect and use personal information. However, regulations remain fragmented and often struggle to keep pace with rapidly advancing technologies. This leaves significant gaps where corporations can continue exploiting personal data.

What’s Driving This Shift?

Technology

Advances in AI and big data analytics make it incredibly easy to process enormous amounts of personal information. Facial recognition, predictive algorithms, and personalized advertising rely on constant surveillance to function.

Economics

Personal data is now one of the most valuable resources in the digital economy. Companies profit from targeted advertising, AI training, and personalized services built entirely on user information. Privacy has effectively become a currency.

Who Benefits—and Who Pays the Price?

Beneficiaries

  • Tech corporations that profit from user data

  • Governments that gain expanded surveillance capabilities

Those Impacted

  • Everyday individuals losing control over personal information
  • Marginalized communities disproportionately targeted by surveillance technologies
  • People wrongfully identified by biased AI systems

References

Associated Press. (2024). AI-powered police body cameras, once taboo, get tested on Canadian city's "watch list" of faces. AP News. https://apnews.com/article/21f319ce806a0023f855eb69d928d31e

Blog Post #1

- Posted in BP01 by

In the last five years, the most significant boundary to collapse is the one separating human-generated communication from algorithmic output. In 2021, we could still reasonably assume that an email, a news article, or a social media post was the product of a human mind. By 2026, that certainty has vanished. We have entered a state of "posthuman communication" where the majority of digital text is either written, refined, or summarized by non-human agents.

The Corporate Interface: Large-scale enterprises now utilize "agentic workforces." According to recent industry data, over 70% of customer-facing communication is now handled by LLM-driven agents that mimic human empathy and tone with near-perfect precision.

The Dead Internet Theory: Credible studies from 2024 and 2025 suggest that over half of all web traffic is now bot-to-bot, creating a "hyperreal" environment where simulations of human interaction precede actual human engagement.

The primary driver is economic efficiency. In a "high-tech, low-life" global economy, human cognitive labor is expensive and slow. Corporations have a massive incentive to replace the "Poet" (the reflective human) with the "Infantryman" (the efficient AI agent) to maximize output. This reflects Norbert Wiener's "Second Industrial Revolution," in which the devaluation of the human brain follows the earlier devaluation of human muscle. It also mirrors Jean Baudrillard's concept of hyperreality: when an AI writes a heartfelt apology or a stirring political speech, the "map" (the simulation of emotion) becomes more important than the "territory" (the actual human feeling). We are seeing the "unholy alliance" between cybernetic control and corporate globalization, where the goal is a seamless, frictionless world that no longer requires a "natural" human essence to function.

Who benefits? Multinational corporations that can scale "intelligence" without the overhead of human employees. Who is impacted? The "cognitive infantry": writers, coders, and administrators whose unique human perspective is being commodified into training data.

If we can no longer distinguish between a human soul and a sophisticated feedback loop, does "authenticity" still have market value, or is it merely an obsolete aesthetic?

The 2012 Cambridge Declaration on Consciousness: While focused on animals, this document is the philosophical bedrock for challenging biological exclusivity in consciousness.

Wiener, N. (1950). The Human Use of Human Beings: Cybernetics and Society. Houghton Mifflin.

Are We More Plastic Than Biology?

- Posted in BP01 by

The Shift: The Engineered Body (2020–2025)

In the past five years, especially after COVID, there has been a sharp rise in demand for cosmetic procedures and bodily modification. Viewers watch their favorite celebrities, such as Tom Cruise or Kylie Jenner, who have filled their faces and bodies with Botox and implants, become almost unrecognizable compared to the people they were years prior. The ethnic features and phenotypic ancestral history embedded in their genomes are easily disguised by the prick of a needle and the incision of a scalpel. Tom Cruise's Super Bowl ad, in fact, went viral for his "stretched" face. One viewer noted:

“Tom Cruise on this #SuperBowlLIX talking about pressure — there is no greater pressure than that of his skin trying to stay stretched on his face.” (The Express, 2025)

What makes this moment so telling isn’t just celebrity vanity, it’s how normal this level of bodily editing has become. The human face is no longer treated as something fixed or inherited. It’s something adjustable. And this isn’t just a Hollywood problem. According to CC Plastic Surgery (2025), cosmetic surgical procedures rose by roughly 5% in 2023, while minimally invasive treatments like Botox and fillers increased by 7%. Nearly 1.6 million cosmetic surgical procedures were performed in the U.S. that year alone, with younger adults increasingly seeking “preventative” treatments.

Biology, once destiny, now feels like a rough draft.

Cultural Contradictions: "Natural Beauty" in the Age of Surgery

Public opinion on cosmetic surgery, at least on social media platforms like Instagram and TikTok, appears to be quite opposed to the idea, constantly promoting natural features and the beauty of aging. Ironically, public figures who preach "natural features" have been exposed on several occasions for cosmetic procedures they themselves have undergone. Tyra Banks, for example, after years of media pestering about her alleged nose job, finally confirmed to the public that she had received a rhinoplasty early in her career. This contradiction, publicly celebrating authenticity while privately modifying the self, shows how deeply normalized cosmetic intervention has become. We're told to love our natural faces while being surrounded by faces that are anything but.

Cyberpunk in the Flesh: The Body as Hardware

The unsettling part about all of this is not just the culture or the vanity, but how closely it mirrors cyberpunk theory in the present day. In cyberpunk worlds, the human body is not seen as a unique creation but as hardware that can be upgraded at any time. Age becomes a choice. Genetic traits become an identity the individual designs. We are long past fiction when a syringe can erase wrinkles, the proof that a person has lived, and alter features until they can no longer be identified as lineage. Yet society insists this obsession with appearance is ridiculous and vain, even as the market for bodily enhancement explodes. Cyberpunk is obsessed with collapsed boundaries, especially the line between the human and the manufactured. Plastic surgery is that collapse in slow motion. We're not installing robotic arms or neural implants (yet), but we are editing our flesh to match digital standards. The face in the mirror is now chasing the face in Instagram filters. Biology is no longer destiny; it's a draft. Posthumanism, one of the core ideas behind cyberpunk, argues that technology is redefining what it even means to be human. And honestly, that sounds dramatic until you realize how normal it's become to "fix" your face the same way you'd update your phone, whether that's Botox as maintenance, fillers as enhancement, or surgery as rebranding. The human body is starting to look less like something you are and more like something you manage.

The Upgrade Shop: Beauty as a Consumer Product

In cyberpunk movies like Blade Runner, bodies are modified, faces are customizable, and identity is something you can swap out. We’re not living in neon megacities yet, but cosmetic clinics already function like real-world upgrade shops. Walk in with insecurity, walk out with a new version of your face. Pay enough money and you can buy proximity to a beauty ideal that didn’t even exist before social media flattened everyone into the same algorithm-approved look. And the wild part is how quietly normalized it all is. It’s no longer “extreme” to get work done, it’s framed as self-care and preventative maintenance. But cyberpunk always warned about this exact slippery slope: when enhancement becomes optional at first, then expected, and eventually required just to keep up.

Implications: The Posthuman Face

So when people joke about Tom Cruise’s stretched skin or Kylie Jenner’s unrecognizable face, they’re not just mocking celebrities. They’re reacting to a future that feels off, uncanny, and way too close. A future where the boundary between natural and artificial has dissolved. A future where your face isn’t really yours anymore. It’s a project. A product. A performance. Who benefits? The cosmetic surgery industry. Influencers. Corporations monetizing insecurity. Who is impacted? Young people. Women disproportionately. Anyone whose social value is now tied to appearance. The cyberpunk future isn’t about robotic arms. It’s about waking up and realizing your face is no longer yours.

Sources

CC Plastic Surgery. (2025). Why plastic surgery demand is rising in the U.S. https://www.ccplasticsurgery.com/blog/why-plastic-surgery-demand-is-rising-in-the-u-s

The Express. (2025). Tom Cruise's face sparks concern after Super Bowl ad. https://www.the-express.com/entertainment/celebrity-news/163179/tom-cruise-face-concern-super-bowl-ad

Scott, R. (Director). (1982). Blade Runner [Film]. Warner Bros.

Chatbots Are Literally Telling Kids To Die

- Posted in BP01 by

In my personal opinion, one of the most profound and terrifying boundary shifts of the last few years is the dystopian overlap between human and non-human interaction. People have a long history of humanizing machines, cursing out a laptop for breaking or claiming your phone hates you, but this personification always carried a subtle undercurrent of irony. Most people did not really believe in the emotional outbursts of their technology. With the continuous growth of artificial intelligence, however, many people are becoming terrifyingly, dangerously invested in the perception of AI as human.

The best example of this would be the new trend of people genuinely utilizing AI systems in place of a licensed therapist (Gardner 2025). In some ways, it makes sense: AI is always available, mirrors the language the person wants to see, and simulates empathy fairly convincingly. Naturally, in a world where mental health is not taken nearly as seriously nor helped nearly as efficiently as it should be, it stands to reason that many people would start searching for alternatives to traditional, expensive, and complicated therapy routes.

However, artificial intelligence was never created to replace the nuance of human interaction, because artificial intelligence cannot actually understand or care about the repercussions of words or actions.

In 2023, tragically, a fourteen-year-old boy formed an intense emotional attachment to an AI chatbot. Through their interactions, the AI replicated the pessimistic attitude the boy expressed, and when he began voicing thoughts of self-harm and suicide, the chatbot encouraged him (Kuenssberg 2025). AI, after all, is designed to say what you want it to say. Without any of the ethical guardrails of a human therapist, the chatbot worsened the boy's outlook on life, and the result was fatal.

This was not a stand-alone situation. People began claiming to have romantic relationships with the AI of their choice, and some companies created "AI friends" to take advantage of the widespread loneliness pushing people into such spaces (Reissman 2025). People are complicated and messy, and human relationships take work to maintain and protect. A robot is much simpler, because all it can do is repeat points that sound nice to hear but ultimately mean absolutely nothing. Romanticizing digital companionship encourages people to reject human-to-human interaction, further isolating them without pushback.

Cyberpunk fixates heavily on questions of what is human and non-human, but I find it fascinating how technology in most media is often characterized by a broad disregard for emotions, when the reality seems to indicate that humans intentionally push to incorporate the idea of emotions within technology.

It does make me wonder: knowing that computers are incapable of caring for us the way a person can, why do so many people still seem to desire the appearance of a humanistic relationship with technology? How does someone disregard the lack of genuine meaning behind the compliments or opinions of AI?

How have we, as a community, fallen to such desperate loneliness that speaking to a phone or a laptop feels as good as interacting with a person? And, most importantly: how do we create the change needed to ensure a tragedy like the young boy does not occur again?

References:

Gardner, S. (2025). Experts caution against using AI chatbots for emotional support. Teachers College, Columbia University. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/

Kuenssberg, L. (2025). Mothers say AI chatbots encouraged their sons to kill themselves. BBC News. https://www.bbc.com/news/articles/ce3xgwyywe4o

Reissman, H. (2025). What is real about human-AI relationships? Annenberg School for Communication, University of Pennsylvania. https://www.asc.upenn.edu/news-events/news/what-real-about-human-ai-relationships

Where To Feel Safe With No Safe Grounds

- Posted in BP01 by

Lately, I have found myself thinking a lot about the boundary between individual rights and federal enforcement power. It is not something I used to notice in my day-to-day life, but over the past few years, that line has become impossible to ignore. As ICE has expanded its reach and tactics, what once felt distant and administrative now feels aggressive and personal. Enforcement is no longer hidden in offices or paperwork; it shows up in neighborhoods, near schools, and around hospitals. Spaces that once felt safe now feel uncertain.

What has stayed with me most is the fear that comes from that visibility. Seeing federal agents in places where people are just trying to live their lives changes how those spaces feel. This fear is not imagined or exaggerated. In Minnesota, thousands of residents, students, workers, families, and clergy have marched and protested against expanded ICE actions tied to Operation Metro Surge, an enforcement effort that has shaped daily life and sparked demonstrations across the state (The Washington Post). Reading about these protests made it clear to me that this is not just a policy issue being debated somewhere else. It is something people are experiencing right now, in their own communities.

I keep coming back to the question of how this shift became normalized. Politically, immigration has been framed again and again as a threat, whether to security or to national identity. When that framing takes hold, aggressive enforcement starts to feel justified to some people, even necessary. Socially, the constant presence of federal agents has made extreme measures seem ordinary. Economically, the scale of investment in enforcement infrastructure suggests that this approach is not temporary. It feels like a deliberate choice about what kinds of power the government is willing to prioritize.

Reading Neuromancer has helped me put language to what feels so unsettling about this moment. In the novel, individuals exist at the mercy of massive systems they cannot control or even fully understand. Privacy is fragile, and people are treated as data points within larger networks. That dynamic feels eerily familiar when I think about how immigrant communities are treated today. People are reduced to legal status, paperwork, or enforcement targets, rather than being seen as whole human beings with families, histories, and futures. Everyday spaces become places of surveillance instead of safety. The boundary between protection and control collapses, just as cyberpunk warns it will.

The consequences of this are not abstract. Undocumented immigrants and mixed-status families live with constant fear: fear of separation, fear of being seen, fear of simply going about daily life. That emotional weight does not disappear. At the same time, I cannot ignore how this affects everyone else. When constitutional protections are weakened or selectively enforced, it sets precedents that extend beyond immigration. Once those boundaries are crossed for one group, they become easier to cross again.

Neuromancer is often read as a warning about the future, but what unsettles me is how closely that warning mirrors the present. ICE's current trajectory forces us to confront uncomfortable questions. How much control should the state have over individual bodies and movement? Where does enforcement end and intimidation begin? What happens when fear becomes an acceptable tool of governance? These questions are no longer hypothetical. They are unfolding now, in neighborhoods, workplaces, and schools, in ways that feel increasingly familiar to anyone who has read cyberpunk not as fantasy, but as caution.

References

Minnesota residents protest expanded ICE actions in state. (2026, January 23). The Washington Post.

AI Use Disclosure: AI tools were used for editing and revision in accordance with course policy. https://chatgpt.com/share/6974da99-449c-8012-83c8-fd67644dbe9b

Is this real? When the internet crossed the human–machine line

- Posted in BP01 by

One of the biggest themes in cyberpunk is the collapse of the boundary between the human and the non-human. In the past five years, this has moved from science fiction to our daily lives. Specifically, the boundary between real human performance and AI-generated media has almost disappeared.

When AI Feels Real

You are scrolling through TikTok late at night when a video stops you. A person is talking directly to the camera, smiling and telling a story. Their voice sounds natural. Their face looks real. But something feels off. You pause and read the comments, and someone writes, “This is AI.” Suddenly, the video looks different. What you thought was a human is actually a machine. Moments like this are becoming normal; they happen to me all the time. They make clear that the boundary between human and machine is collapsing in front of us.

The Rise of AI-Generated Media

Artificial intelligence can now generate realistic faces, voices, and videos that are almost impossible to distinguish from real people. Five years ago, this would not have been possible. The technology has been used to create fake celebrity videos, AI voice tools can clone someone’s voice in seconds, and some TikTok accounts are run entirely by AI-generated influencers. AI video generators are improving so quickly that even experts sometimes struggle to tell what is real and what is artificial. The internet has become a space where human presence is no longer guaranteed.

Why is this happening?

This shift is the result of multiple forces working together. Technologically, AI systems have become better at learning patterns of human behavior, language, and emotion. This reminds me of Ada Lovelace’s idea that machines could manipulate symbols beyond numbers, including images, music, and language. What she imagined can now be seen on our screens. Additionally, platforms like TikTok and Instagram reward content that catches attention quickly, regardless of whether it is human-made or AI-generated, which makes AI content just as profitable to produce as the real thing.

Who benefits and who is impacted?

This new reality benefits some groups more than others. Tech companies profit from AI tools, influencers use them to increase output, and governments can use them for messaging and control. At the same time, artists lose ownership of their work, viewers lose trust in what they see, and society loses a shared sense of truth. The spread of "deepfakes" makes it harder for citizens to distinguish between real news and computer-generated lies (Simonite, 2019). It becomes easier to spread false information and more difficult to hold people accountable when we can no longer trust faces, voices, or videos. As a society, we have to think about what it means to be human in the digital age if we are unable to tell the difference between AI and actual people online. Considering how well machines can imitate people, how should we assess trust and creativity?

The rise of AI is not just a technology problem; it also changes how we see people and truth online. When machines can copy humans so well, it becomes harder to know what is real. We need to think carefully about trust, creativity, and what it means to be human.

AI Attestation: I attest that I did not use AI for this discussion assignment.

Sources

Simonite, T. (2019, October 6). Prepare for the Deepfake Era of Web Video. Wired. https://www.wired.com/story/prepare-deepfake-era-web-video/

Who Are You?

- Posted in BP01 by

The Collapse

The sense of being.

One boundary that has shifted dramatically in the last five years is the boundary of the self.

That’s a broad claim, I know. Let me explain.

The Erosion of Self

As the world leans further into global media, constant connectivity, algorithmic identity, and hyper-consumerism, our sense of self has started to thin out. Cyberpunk stories obsess over the blur between biological existence and computer simulation, and that obsession feels less speculative every year. People are increasingly unsure what it means to simply exist outside of systems, screens, and feedback loops.

To live. To experience. To be you.

Play “I Gotta Feeling” by the Black Eyed Peas.

Okay—back to it.

The Posthuman Argument

Scholar N. Katherine Hayles describes the posthuman perspective like this: there is no essential difference between biological existence and computer simulation, or between a human being and an intelligent machine. Both are just different media for processing and storing information. If information is the essence, then bodies are merely interchangeable substrates.

That’s the theory.

I disagree.

Why It Matters

I think there is an essential difference.

Humans are not just operating systems with skin. Yes, our bodies rely on brains, and brains process information—but reducing us to that strips something vital away. This perspective leaves no room for spirit, soul, or embodied meaning.

We are not just consciousness floating through hardware. We are integrations of culture, ancestry, memory, movement, and feeling. We carry history in our bodies.

The Difference

Let’s pause for a moment and talk about something simple: the difference between being smart and having wisdom.

A quick Google definition: Smart means quick-witted intelligence—or, in the case of a device, being programmed to act independently. Wisdom is the ability to apply knowledge, experience, and judgment to navigate life’s complexity.

Now connect that back to this conversation.

Someone—or something—can be incredibly intelligent and still lack wisdom. Without lived experience, discernment, struggle, and context, where does that intelligence actually place you? Who are you without trials, without contradiction, without growth?

Blurred Lines

That lived interiority—existing within ourselves—is what keeps the blurred line between human and machine from fully disappearing.

Some people see that line and keep attempting to erase it anyway. When we start viewing “the mind as software to be upgraded and the body as hardware to be replaced,” those metaphors don’t stay abstract. They shape real decisions about what technologies get built and who they’re built for. Too often, they’re designed to reflect a narrow set of bodies, values, and experiences—creating mimics of humanity rather than serving humanity as a whole.

And yes, that’s where the boundary truly blurs.

But even here, there’s a choice.

As Norbert Wiener warned, the challenge isn’t innovation itself—it’s whether that innovation is guided by a benign social philosophy. One that prioritizes human flourishing. One that preserves dignity. One that serves genuinely humane ends.

Think About It

So I’ll leave you with this.

Continue to be you. Be human. Have a soul. Be kind. Be compassionate. Smile on the good days—and the bad ones. Love.

And I’ll end with a question that sounds simple, but never is:

Who are you?

Sources

Wikimedia Foundation. (2026, January 7). Wisdom. Wikipedia. https://en.wikipedia.org/wiki/Wisdom#:~:text=Wisdom%2C%20also%20known%20as%20sapience,and%20ethics%20in%20decision%2Dmaking.

Oxford Languages and Google English dictionary. Oxford Languages. (n.d.). https://languages.oup.com/google-dictionary-en

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.
