BP01

Blog Post #1: I’ve Seen Things You People Wouldn’t Believe

A central theme in cyberpunk is the collapse of established boundaries—whether political borders, the human/non-human divide, or categories of identity. These fictional boundary collapses mirror real shifts happening today.

Identify one specific boundary that has shifted significantly in the past five years. Describe what has changed with concrete examples and credible sources. Then analyze what's driving this shift (technology, economics, social movements, politics, culture). Connect your analysis to course themes like posthumanism, globalization, or technological disruption. Consider the implications: who benefits, who's impacted, and what questions does this raise?

Have You Ever Heard Your Mum Being Kidnapped?

- Posted in BP01 by

Have you ever heard your mum being kidnapped or trapped somewhere? More and more people have, even though actual kidnappings have not increased. The New Yorker published an article in 2024 about people falling victim to calls in which a loved one's voice begs for money (Bethea, 2024). Thanks to recent developments in AI, a person's voice can now be used without them actually saying anything. There are certainly positive applications, such as recreating someone's voice after they lose it to a medical condition, but the negative consequences seem to be taking over.

Scam calls are not new. Robotic voices asking for money, or callers trying to trick you into subscribing to something over the phone, have been an issue for many years, but those scams were easier to see through. Now scams have become far more personal and emotional through the use of loved ones' voices. The development of voice assistants like Siri and Alexa, and now AI like ChatGPT, is what enabled this problem. We humans love technological advancement; anything that seems to make our daily lives more efficient we consider good, but this technology has now grown into dangerous territory.

Robin and Steve are one couple affected by these scam calls. As The New Yorker describes, Robin received a middle-of-the-night call from her mother-in-law, Mona, who sounded like she was in trouble, until a man took over the call and threatened to kill her unless the couple sent $750. Although the couple found the small amount strange, they went through with the payment. Only after the man hung up did they reach Mona, who had no idea what they were talking about. They had been scammed.
In another example from The New Yorker, a mother received a phone call from her daughter, supposedly on a ski trip, saying she had messed up, while a man in the background threatened to kill her by pumping drugs into her stomach. The details of the call were terrifying and disgusting, which is why the mother, once she found out it was a scam, cursed at the caller for putting those images in her head before hanging up.
The problem, however, stretches far beyond blackmail and scam calls. Biden's voice has been used to call voters and tell them not to vote for him, a political shift that could leave democratic elections less credible and authentic (Bethea, 2024). This human/nonhuman boundary has eroded more than ever before, costing us our grip on what is real and what is not. It affects us emotionally, financially, and politically, and these technologies now take on a life of their own: once a voice has been recorded, AI can use it. And we are still only at the beginning of this process; we don't yet know how much it will affect our lives in the future.

Bethea, C. (2024, March 7). The Terrifying A.I. Scam That Uses Your Loved One’s Voice. The New Yorker. https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice

Where To Feel Safe With No Safe Grounds

- Posted in BP01 by

Lately, I have found myself thinking a lot about the boundary between individual rights and federal enforcement power. It is not something I used to notice in my day-to-day life, but over the past few years, that line has become impossible to ignore. As ICE has expanded its reach and tactics, what once felt distant and administrative now feels aggressive and personal. Enforcement is no longer hidden in offices or paperwork; it shows up in neighborhoods, near schools, and around hospitals. Spaces that once felt safe now feel uncertain.

What has stayed with me most is the fear that comes from that visibility. Seeing federal agents in places where people are just trying to live their lives changes how those spaces feel. This fear is not imagined or exaggerated. In Minnesota, thousands of residents, including students, workers, families, and clergy, have marched and protested against expanded ICE actions tied to Operation Metro Surge, an enforcement effort that has shaped daily life and sparked demonstrations across the state (The Washington Post). Reading about these protests made it clear to me that this is not just a policy issue being debated somewhere else. It is something people are experiencing right now, in their own communities.

I keep coming back to the question of how this shift became normalized. Politically, immigration has been framed again and again as a threat, whether to security or to national identity. When that framing takes hold, aggressive enforcement starts to feel justified to some people, even necessary. Socially, the constant presence of federal agents has made extreme measures seem ordinary. Economically, the scale of investment in enforcement infrastructure suggests that this approach is not temporary. It feels like a deliberate choice about what kinds of power the government is willing to prioritize. Reading Neuromancer has helped me put language to what feels so unsettling about this moment.
In the novel, individuals exist at the mercy of massive systems they cannot control or even fully understand. Privacy is fragile, and people are treated as data points within larger networks. That dynamic feels eerily familiar when I think about how immigrant communities are treated today. People are reduced to legal status, paperwork, or enforcement targets, rather than being seen as whole human beings with families, histories, and futures. Everyday spaces become places of surveillance instead of safety. The boundary between protection and control collapses, just as cyberpunk warns it will.

The consequences of this are not abstract. Undocumented immigrants and mixed-status families live with constant fear: the fear of separation, fear of being seen, fear of simply going about daily life. That emotional weight does not disappear. At the same time, I cannot ignore how this affects everyone else. When constitutional protections are weakened or selectively enforced, it sets precedents that extend beyond immigration. Once those boundaries are crossed for one group, they become easier to cross again.

Neuromancer is often read as a warning about the future, but what unsettles me is how closely that warning mirrors the present. ICE’s current trajectory forces us to confront uncomfortable questions. How much control should the state have over individual bodies and movement? Where does enforcement end and intimidation begin? What happens when fear becomes an acceptable tool of governance? These questions are no longer hypothetical. They are unfolding now, in neighborhoods, workplaces, and schools, in ways that feel increasingly familiar to anyone who has read cyberpunk not as fantasy, but as caution.

References

Minnesota residents protest expanded ICE actions in state. (2026, January 23). The Washington Post.

AI Use Disclosure: AI tools were used for editing and revision in accordance with course policy. https://chatgpt.com/share/6974da99-449c-8012-83c8-fd67644dbe9b

Is this real? When the internet crossed the human–machine line

- Posted in BP01 by

One of the biggest themes in cyberpunk is the collapse of the boundary between the human and the non-human. In the past five years, this has moved from science fiction to our daily lives. Specifically, the boundary between real human performance and AI-generated media has almost disappeared.

When AI Feels Real

You are scrolling through TikTok late at night when a video stops you. A person is talking directly to the camera, smiling and telling a story. Their voice sounds natural. Their face looks real. But something feels off. You pause and read the comments, and someone writes, “This is AI.” Suddenly, the video looks different. What you thought was a human is actually a machine. Moments like this are becoming normal; they happen to me a lot. They make clear that the boundary between human and machine is collapsing in front of us.

The Rise of AI-Generated Media

Artificial intelligence can now generate realistic faces, voices, and videos that are almost impossible to distinguish from real people. Five years ago this would not have been possible. The technology has been used to create fake celebrity videos, AI voice tools can copy someone’s voice in seconds, and some TikTok accounts are run entirely by AI-generated influencers. AI video generators are improving so quickly that even experts sometimes struggle to identify what is real and what is artificial. The internet has become a space where human presence is no longer guaranteed.

Why is this happening?

This shift is the result of multiple forces working together. Technologically, AI systems have become better at learning patterns of human behavior, language, and emotion. This reminds me of Ada Lovelace’s idea that machines could manipulate symbols beyond numbers, including images, music, and language. What she imagined can now be seen on our screens. Additionally, platforms like TikTok and Instagram reward content that catches attention quickly, regardless of whether it is human-made or AI-generated, which makes such content very attractive to creators.

Who benefits and who is impacted?

However, this new reality benefits some groups more than others. Tech companies profit from AI tools, influencers use them to increase output, and governments can use them for messaging and control. At the same time, artists lose ownership of their work, viewers lose trust in what they see, and society loses a shared sense of truth. The spread of "deepfakes" makes it harder for citizens to distinguish between real news and computer-generated lies (Simonite, 2019). It becomes easier to spread false information and more difficult to hold people accountable when we can no longer trust faces, voices, or videos. As a society, we have to think about what it means to be human in the digital age if we are unable to tell the difference between AI and actual people online. Considering how well machines can imitate people, how should we assess trust and creativity?

This shows that the rise of AI is not just a technology problem, but it also changes how we see people and truth online. When machines can copy humans so well, it becomes harder to know what is real. We need to think carefully about trust, creativity, and what it means to be human.

AI Attestation: I attest that I did not use AI for this discussion assignment.

Sources

Simonite, T. (2019, October 6). Prepare for the Deepfake Era of Web Video. Wired. https://www.wired.com/story/prepare-deepfake-era-web-video/

Who Are You?

- Posted in BP01 by

The Collapse

The sense of being.

One boundary that has shifted dramatically in the last five years is the boundary of the self.

That’s a broad claim, I know. Let me explain.

The Erosion of Self

As the world leans further into global media, constant connectivity, algorithmic identity, and hyper-consumerism, our sense of self has started to thin out. Cyberpunk stories obsess over the blur between biological existence and computer simulation, and that obsession feels less speculative every year. People are increasingly unsure what it means to simply exist outside of systems, screens, and feedback loops.

To live. To experience. To be you.

Play “I Gotta Feeling” by the Black Eyed Peas.

Okay—back to it.

The Posthuman Argument

Scholar N. Katherine Hayles describes the posthuman perspective like this: there is no essential difference between biological existence and computer simulation, or between a human being and an intelligent machine. Both are just different media for processing and storing information. If information is the essence, then bodies are merely interchangeable substrates.

That’s the theory.

I disagree.

Why It Matters

I think there is an essential difference.

Humans are not just operating systems with skin. Yes, our bodies rely on brains, and brains process information—but reducing us to that strips something vital away. This perspective leaves no room for spirit, soul, or embodied meaning.

We are not just consciousness floating through hardware. We are integrations of culture, ancestry, memory, movement, and feeling. We carry history in our bodies.

The Difference

Let’s pause for a moment and talk about something simple: the difference between being smart and having wisdom.

A quick Google definition: Smart means quick-witted intelligence—or, in the case of a device, being programmed to act independently. Wisdom is the ability to apply knowledge, experience, and judgment to navigate life’s complexity.

Now connect that back to this conversation.

Someone—or something—can be incredibly intelligent and still lack wisdom. Without lived experience, discernment, struggle, and context, where does that intelligence actually place you? Who are you without trials, without contradiction, without growth?

Blurred Lines

That lived interiority—existing within ourselves—is what keeps the blurred line between human and machine from fully disappearing.

Some people see that line and keep attempting to erase it anyway. When we start viewing “the mind as software to be upgraded and the body as hardware to be replaced,” those metaphors don’t stay abstract. They shape real decisions about what technologies get built and who they’re built for. Too often, they’re designed to reflect a narrow set of bodies, values, and experiences—creating mimics of humanity rather than serving humanity as a whole.

And yes, that’s where the boundary truly blurs.

But even here, there’s a choice.

As Norbert Wiener warned, the challenge isn’t innovation itself—it’s whether that innovation is guided by a benign social philosophy. One that prioritizes human flourishing. One that preserves dignity. One that serves genuinely humane ends.

Think About It

So I’ll leave you with this.

Continue to be you. Be human. Have a soul. Be kind. Be compassionate. Smile on the good days—and the bad ones. Love.

And I’ll end with a question that sounds simple, but never is:

Who are you?

Sources

Wikimedia Foundation. (2026, January 7). Wisdom. Wikipedia. https://en.wikipedia.org/wiki/Wisdom

Oxford Languages & Google. (n.d.). Oxford Languages English dictionary. https://languages.oup.com/google-dictionary-en

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.

Humanity and AI: The Blurring Line

- Posted in BP01 by

Intelligence seemed to be exclusively human for most of human history. While machines could compute, store data, and obey commands, thinking and creativity were thought to be exclusively human qualities. That barrier doesn't feel steady anymore. These days, artificial intelligence can write, create graphics, help with diagnosis, and have human-like conversations. What has changed is not simply technology, but the distinction between human and machine intellect that we formerly took for granted.

The Uneven Arrival

Cyberpunk has long highlighted instances in which cutting-edge technology interferes with daily life. Rather than envisioning a far-off future, the genre depicts how the future arrives unevenly, smashing into the present and altering societal institutions. The development of AI follows this same cyberpunk pattern. It represents the blurring of lines between creation and automation, person and machine, and mind and system, a dynamic shown in the film Blade Runner: The Final Cut.

Artificial Intelligence in Everyday Life

This change may be seen in many aspects of daily life. AI technologies help with research, editing, and brainstorming in the classroom. Algorithms are used in the workplace to track productivity and filter job applications. AI systems are employed in healthcare to support diagnostic decisions and preserve patient data. Creative industries are also changing as AI-generated music, literature, and visuals become increasingly competitive with human-made work. Instead of being limited to humans, intelligence is now deeply embedded in global technology networks.

Power and Inequality

Cyberpunk is often summed up as "high tech, low life," a phrase that captures the world of the novel Neuromancer: state-of-the-art technology coexisting with inequality and insecurity. AI fits this pattern well. While companies and organizations benefit from speed and efficiency, many workers risk losing their jobs, being surveilled, or having their skills devalued. Because these systems are often owned and controlled by a few large companies, there are questions about who benefits most from this technological shift and who bears the risks.

Posthumanism

Furthermore, the posthumanism ideas discussed in class are related to this boundary collapse. Posthumanism argues that human-machine interactions change identity and cognition, challenging the notion that humans and technology are separate. When AI assists with writing, reasoning, and decision-making, intelligence becomes shared rather than exclusive to humans. Cyberpunk often depicts the body as an interface, but AI now functions as a cognitive interface, altering our mental processes without physically merging with us.

Risks and Bias

There are big risks that come with these changes. AI systems can spread false information, copy bias, and create what some people call "bullshit at scale," meaning outputs that are confident but don't make sense. These problems get worse with globalization because AI models are trained on huge amounts of data from people all over the world, sometimes without their knowledge or consent. The fact that decisions made by a tiny number of firms can affect workers, schools, and cultures worldwide echoes cyberpunk's worry about unbridled corporate power and lax accountability.

Cyberpunk Warning

In cyberpunk, straightforward solutions are uncommon, and this uncertainty is reflected in the rise of AI. While technology is undeniably remarkable, it also challenges long-held notions of responsibility, intelligence, and creativity. When Roy Batty says, "I've seen things you people wouldn't believe," he is speaking from a world where boundaries have already collapsed. That sentence now seems less like fiction and more like a warning. Rather than whether technology will alter what it means to think, the question at hand is whether humans will still oversee the use of AI.

Citations:

Gibson, W. (1984). Neuromancer. Ace Books.

Scott, R. (Director). (1982). Blade Runner: The Final Cut [Film]. Warner Home Video. Swank Digital Campus. https://digitalcampus.swankmp.net/xula393246/watch/C9BD78E96D3A71E0

ChatGPT was used to generate the image used in this blog.

Technology, Justice, and Promise

- Posted in BP01 by

Technology has become more embedded in the modern criminal justice system, often introduced with a promise of reform, transparency, and efficiency, from face recognition and predictive policing software to risk assessment algorithms that influence parole decisions, release dates, and sentencing. The key question is not whether technology will continue to influence the criminal justice system, but whether it will do so in a way that promotes fairness and accountability.

Advocates for these tools frequently frame them as fair alternatives to human prejudice. After all, parole boards, law enforcement officers, and judges are imperfect, while algorithms are viewed as data-driven systems that analyze vast amounts of data more reliably than a human being can. For example, risk assessment tools predict the probability of a defendant reoffending, in theory giving judges the ability to make more accurate decisions about early release and sentencing.

However, technologies are not created in a social vacuum. Algorithms are trained on historical data that reflects racially discriminatory policing, policies, and prosecution practices, and these preexisting disparities are then encoded into new software. ProPublica (2016) found that risk assessment tools unfairly labeled Black defendants as more likely to reoffend in comparison to white defendants. This illustrates how technology can conceal bias rather than eradicate it.

Similar concerns arise about predictive policing systems. These systems predict crime hotspots, pushing law enforcement to be proactive in those areas. Though this could be efficient, many argue that it creates a feedback loop: overpolicing a neighborhood generates more arrest data there, which in turn marks the neighborhood as a hotspot and justifies even more policing.
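To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch. Every number in it is made up for the example: two neighborhoods have identical true crime rates, but one starts with more recorded arrests, and patrols are allocated in proportion to past arrest data.

```python
# Hypothetical simulation of the predictive-policing feedback loop:
# equal true crime rates, unequal historical arrest records.

true_crime_rate = {"A": 0.05, "B": 0.05}   # identical underlying rates
recorded_arrests = {"A": 100, "B": 50}     # biased historical data

for year in range(5):
    # Patrols are allocated in proportion to past recorded arrests...
    total = sum(recorded_arrests.values())
    patrols = {n: recorded_arrests[n] / total for n in recorded_arrests}
    # ...and more patrols mean more of the same crime gets recorded.
    for n in recorded_arrests:
        recorded_arrests[n] += int(1000 * patrols[n] * true_crime_rate[n])

share_A = recorded_arrests["A"] / sum(recorded_arrests.values())
print(round(share_A, 2))  # → 0.67, even though the true rates are equal
```

After five simulated years, neighborhood A still accounts for about two-thirds of recorded arrests, not because crime there is higher, but because the initial bias in the data keeps directing patrols (and therefore new data) back to it.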

However, eliminating technology completely would be wrong. Digital tools can support reform and accountability. For example, body-worn cameras were widely adopted in 2014–2016 after the killing of Mike Brown. This tactic increased transparency and fairness, providing valuable evidence and deterring misconduct. Governance, therefore, matters more than the invention itself.

Furthermore, philosophical conflicts over punishment and accountability are also reflected in discussions around technology and the criminal justice system. Should efficiency take precedence over individual consideration in the legal system? When liberty is at risk, is predictive accuracy a suitable objective? These questions don't call for an answer written in code, but for reflection and deliberation.

In the end, technology in modern society is neither a cure-all nor inherently negative. It's a powerful tool shaped by society, and whether its impact is positive or negative depends on the frameworks that govern its use. For a society committed to innovation, the main focus should be using technology to serve fairness, dignity, and transparency.

Blog Post #1: When Technology Judges the Game (Soccer)

- Posted in BP01 by

In the past five years, one clear boundary that has changed a lot is who makes decisions in soccer. With the use of VAR (Video Assistant Referee), the boundary between human judgment and technological judgment has become unclear.

Before VAR, referees made decisions only with their own eyes and experience. Mistakes were part of the game, and fans accepted that referees are human. Today, VAR uses cameras, slow motion, and digital lines to review goals, penalties, offsides, and red cards. In many situations, the referee no longer has the final word alone; technology now helps, and sometimes corrects, the referee.

VAR has been used more widely since around 2018, but in the last five years it has become normal in major leagues like the Premier League, La Liga, and Serie A, and in international tournaments such as the World Cup. According to FIFA, VAR was created to reduce clear and obvious errors in important moments of the game. This shows a big change in how fairness is defined in soccer.

This boundary shift is driven mainly by technology and globalization. Soccer is now a global business. Matches are watched by millions of people around the world. Every mistake is shared on social media in seconds. Because of this pressure, leagues want more “objective” decisions. Technology promises accuracy and fairness, even if it slows the game.

This change connects clearly to cyberpunk themes. In cyberpunk stories, technology is often used to control systems and reduce human error, but it also creates new problems. VAR was created to make soccer more fair, but many fans feel it takes away emotion and spontaneity. Goals are celebrated, then canceled. Players wait while machines check lines that are invisible to the human eye. The game feels less human.

VAR also connects to posthumanism, which questions where human control ends and machine control begins. When a computer draws offside lines and decides if a player’s toe is ahead, is that still human judgment? Or is the machine now the authority? Referees often say they must follow VAR, even if their original decision felt right.

As someone who watches a lot of soccer games, I experience this boundary shift very personally. Many times, I celebrate a goal, and a few seconds later the game stops because VAR is checking the play. Sometimes the technology decides the goal is offside, even when it looked fine in real time. I understand that soccer still has a lot of human control. The referee is human, and there are also humans working in the VAR room. However, the offside lines, the slow-motion replays, and the final images all come from technology. These tools strongly influence the referee’s decision and often take a long time. Even though this can be frustrating, soccer is the game I love. In the end, VAR shows how a cyberpunk-style boundary collapse is happening in real life. Soccer is still played by humans, but it is now judged with the help of machines. The question is not only whether VAR is good or bad, but how much control we are willing to give to technology. Like in cyberpunk stories, once machines enter the system, the game is never the same.

Sources

https://www.fifa.com/en/watch/ws_FR5wijEqqZgJJbN3k2g
https://www.bbc.com/sport/football/articles/czdq0m2z0emo
https://www.bbc.com/sport/football/articles/c7v0lz7q7q2o
https://www.bbc.com/sport/football/articles/cvgrx8ml7m0o

AI: ChatGPT was used to assist with translation and organizing ideas. The content and ideas are entirely the author’s.

Hear no evil, Speak no evil, SEE all evil

- Posted in BP01 by

In a cyberpunk future, a highly technological society leaves little room for personal privacy. With memories that can be downloaded onto a hard drive, conversations recorded at all times, and surveillance systems wherever you turn, what does privacy even truly mean? Sadly, we are not as far from this future as it may seem, as recording technology advances every day. What started out as a way to preserve memories and document history has morphed into a way to surveil and invade the privacy of strangers on the street. One of the most significant boundary collapses of the last five years has nothing to do with changes in climate or the breaking (and building) of literal borders; it is the erasure of privacy in the digital and technological age.

What Changed?

In previous years, taking a photograph was something personal and even private. By taking a picture, you were inviting others in to indulge in a day in your life. A hot coffee from your local coffee shop, you blowing out the candles on your fifth birthday cake, or even a picture of you and your friends from prom was taken for you personally, to share with others if you saw fit. People were able to access a portion of your life with your consent, and it was clear that taking pictures of, or recording, others without their consent was unethical and, honestly, creepy.

When social media became more popular, however, and websites like WorldStar encouraged people to record moments between strangers, these boundaries began to bend. Enraptured by the dopamine rush of likes, views, and comments, a race began to be the biggest name, have the funniest video, and be the most known. No longer was your day-to-day life something kept between you and a group of friends; now moments of your life could be recorded and posted without your knowledge or consent. The lines of privacy blurred even more when streaming became popularized. At that point it was not only normal to be constantly under surveillance but almost expected, as streamers conducted twenty-four-hour live-streams giving fans constant access to their daily happenings.

One of the most recent, and in my opinion most stark, boundary shifts came in the form of the Ray-Ban Meta glasses. These glasses allow their wearer to record from their eye view, most of the time without the knowledge of those around them. It has also been found that many non-users of the glasses fear privacy breaches from those who own them, while owners feel they get a social boost from the technological advancement (Anzolin & Nostro, 2025).

Another occurrence that aided in this shift is the rise of police brutality. In instances where no one else was around, footage was the only thing many could use to prove their innocence. The rise in police brutality not only fueled a subsequent rise in citizen journalism, it made having a phone or recording device on you at all times feel almost essential. Moments that would have gone unknown and undiscussed were now available on platforms for people around the world to see. Eventually, recordings from people on the street became many people's main source of news when media stations were not reporting on what was truly happening (Yeh, 2020). Because of this, people became prepared to record a stranger's possible worst moment at the drop of a hat, whether for safety or entertainment.

The Integration

Thus far, we understand that cyberpunk societies are marked by highly advanced technologies and weak governments. The advance in technology that has contributed to the erasure of privacy in the modern day is obvious; what I instead want to discuss is how weak government further pushes us toward a cyberpunk future. As with the earlier example about the rise in police brutality, the immense racism that our government was built upon, and has yet to make up for, pushed citizens to feel that a camera phone was a tool of protection. With the threat of aggression from police officers becoming increasingly imminent for marginalized communities, technology can feel like the only thing that might save your life. This is no longer true only for those of marginalized identities, as we now see those who do not proudly support the government at risk of experiencing these aggressions as well. Lack of government protection, or of reprimand for the perpetrators of harm, actively pushes us closer to the cyberpunk future we deem unrealistic.

No AI Technology was used to create this blog post.

References

Anzolin, E., & Nostro, G. L. (2025, December 9). Focus: Ray-Ban Meta glasses take off but face privacy and competition test. Reuters. https://www.reuters.com/sustainability/boards-policy-regulation/ray-ban-meta-glasses-take-off-face-privacy-competition-test-2025-12-09/

Yeh, J. (2020, August 5). “I’m out here—I am the news for our people.” How protesters across the country are keeping informed. Columbia Journalism Review. https://www.cjr.org/united_states_project/protest-activist-news-social-media.php

The Death of the "Real"

- Posted in BP01 by

In the neon-soaked sprawl of William Gibson’s Neuromancer, the line between the organic and the digital is a porous membrane, constantly punctured by neural jacks and construct personalities. We often treat cyberpunk as a warning of a distant, dystopian future. We are wrong. It is a diagnosis of our present. The most significant boundary collapse of the last five years is not a geopolitical border dissolving, but the erosion of the human/synthetic divide—specifically, the collapse of human exclusivity in creativity and truth.

The Shift: The Synthetic Takeover (2022–2025)

For centuries, "creation" was the final fortress of humanity. Machines could weave cloth or assemble cars, but they could not dream. That boundary evaporated in August 2022, when Jason M. Allen's Théâtre D'opéra Spatial took first place at the Colorado State Fair. It wasn't painted with a brush; it was hallucinated by Midjourney. The controversy that followed—artists crying foul, the U.S. Copyright Office refusing protection because "human authorship" was absent—marked the moment the definition of "artist" fractured.

Since then, the breach has widened into a chasm:

The "Dead Internet" is Here: According to Imperva's 2024 Bad Bot Report, automated traffic accounted for nearly half (49.6%) of all web traffic in 2023. We are increasingly screaming into a void populated by echoes of ourselves.

The Liar’s Dividend: The explosion of deepfakes (rising 244% in 2024 according to Entrust) has created an epistemological crisis. We have moved beyond "fake news" to "fake reality." When a CEO’s voice can be cloned to authorize million-dollar transfers, or a political candidate’s face grafted onto incriminating footage, the boundary between evidence and fabrication is gone.

The Drivers: Why Now?

This collapse wasn't an accident; it was engineered by the convergence of technological capability and surveillance capitalism.

Technology: The Transformer architecture (the "T" in GPT) allowed machines to stop acting like calculators and start acting like pattern-completers, digesting the sum total of human expression to mimic our "soul."

Economics: This is classic "High Tech, Low Life." The driving force is the ruthless efficiency of the market. Why pay a graphic designer, a copywriter, or an influencer when an algorithm can generate a "good enough" facsimile for fractions of a penny? The rise of AI influencers—virtual avatars generating billions in revenue—proves that capital prefers compliant, scalable code over messy, unionizing humans.

Connecting to Course Themes

This shift directly mirrors our recent discussions on Post-humanism and Baudrillard’s Simulacra. We are entering a phase where the simulation (the AI image, the deepfake) is more potent and valuable than the reality it mimics. The map has not just covered the territory; it has replaced it.

Furthermore, we see the cyberpunk theme of commodification of memory. These models are trained on our scraped data—our blogs, our art, our photos. We have been harvested to build the very machines that render us obsolete. It is the ultimate alienation: our collective culture is sold back to us by a subscription API.

Implications: The Human Question

The collapse of this boundary raises terrifying questions. If creativity is just probability management, what is left for us?

Who Benefits? The tech oligarchs holding the keys to the compute power.

Who is Impacted? The "cognitive proletariat"—writers, artists, and knowledge workers whose identity is tied to their output.

We are standing on the precipice of a world where "human-made" becomes a luxury label, a niche artisan category in a sea of synthetic content. The cyberpunk future isn't about cybernetic arms; it's about waking up and realizing the person you're talking to online—and perhaps the art you love—never existed at all.

Sources

Hasson, E. (2024, April 16). Five key takeaways from the 2024 Imperva Bad Bot Report. Imperva. https://www.imperva.com/blog/five-key-takeaways-from-the-2024-imperva-bad-bot-report

Kadet, K. (2024, November 19). Deepfake attempts occur every five minutes amid 244% surge in digital document forgeries. Entrust. https://www.entrust.com/company/newsroom/deepfake-attacks-strike-every-five-minutes-amid-244-surge-in-digital-document-forgeries

University of Richmond Law School. (2025, January 16). The synopsis - AI, art, & the law with Space Opera Theater [Video]. YouTube. https://www.youtube.com/watch?v=kYnXADaTF9A
