Have You Ever Heard Your Mum Being Kidnapped?

- Posted in BP01 by

Have you ever heard your mum being kidnapped or trapped somewhere? More and more people have, even though the number of actual kidnappings has not increased. In 2024, The New Yorker published an article about people falling victim to calls in their loved ones’ voices, pleading for money (Bethea, 2024). Thanks to recent developments in AI, a person’s voice can now be used without them ever saying the words. There are certainly positive applications, such as recreating the voice of someone who has lost it to a medical condition, but the negative consequences seem to be taking over.

Scam calls are nothing new. Robotic voices asking for money, or callers trying to trick you into subscribing to something over the phone, have been an issue for many years, but those scams were easier to see through. Now scams have become much more personal and emotional through the use of loved ones’ voices. The development of assistant technologies like Siri and Alexa, and now AI like ChatGPT, is what has enabled this problem. We humans love technological advancement, and we consider anything that seems to help us and make our daily lives more efficient to be good, but this technology has now grown into dangerous areas.

Robin and Steve are one couple affected by these scam calls. As described in The New Yorker, Robin was called in the middle of the night by her mother-in-law, Mona, who gave her the feeling that she was in trouble, until a man took over the call and said he would kill her if the couple didn’t send $750. Although the couple found this small amount strange, they proceeded with the payment. After the man hung up, they managed to reach Mona, who had no idea what they were talking about. They had been scammed.
Another example The New Yorker gives is that of a mother who received a phone call from her daughter, supposedly on a ski trip, saying that she had messed up, with a man in the background threatening to kill her by pumping drugs into her stomach. The details of the call must have been terrifying and disgusting, which is why the mother, once she found out it was a scam, cursed at the caller for putting those kinds of images in her head before hanging up.
The problem, however, stretches far beyond blackmail and scam calls. The voices of politicians like Biden have been used to call voters and tell them not to vote for him, a political shift that could leave democratic elections less credible and authentic (Bethea, 2024). The boundary between human and nonhuman has been eroded more than ever before, causing us to lose our grasp of what is real and what is not. It affects us emotionally, financially, and politically, and these technologies are now starting to take on a life of their own. Once a voice has been recorded, AI can use it. And we are still only at the beginning of this process; we don’t yet know how much it will affect our lives in the future.

Bethea, C. (2024, March 7). The Terrifying A.I. Scam That Uses Your Loved One’s Voice. The New Yorker. https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice

Is this real? When the internet crossed the human–machine line

- Posted in BP01 by

One of the biggest themes in cyberpunk is the collapse of the boundary between the human and the non-human. In the past five years, this has moved from science fiction to our daily lives. Specifically, the boundary between real human performance and AI-generated media has almost disappeared.

When AI Feels Real

You are scrolling through TikTok late at night when a video stops you. A person is talking directly to the camera, smiling and telling a story. Their voice sounds natural. Their face looks real. But something feels off. You pause and read the comments, and someone has written, “This is AI.” Suddenly, the video looks different. What you thought was a human is actually a machine. Moments like this are becoming normal, at least for me; they happen to me a lot. They make clear that the boundary between human and machine is collapsing in front of us.

The Rise of AI-Generated Media

Artificial intelligence can now generate realistic faces, voices, and videos that are almost impossible to distinguish from real people. Five years ago this would not have been possible. The technology has been used to create fake celebrity videos, AI voice tools can copy someone’s voice in seconds, and some TikTok accounts are run entirely by AI-generated influencers. AI video generators are improving so quickly that even experts sometimes struggle to identify what is real and what is artificial. The internet has become a space where human presence is no longer guaranteed.

Why is this happening?

This shift is the result of multiple forces working together. Technologically, AI systems have become better at learning patterns of human behavior, language, and emotion. This reminds me of Ada Lovelace’s idea that machines could manipulate symbols beyond numbers, including images, music, and language. What she imagined can now be seen on our screens. Additionally, platforms like TikTok and Instagram reward content that catches attention quickly, regardless of whether it is human-made or AI-generated, which makes AI-generated content an attractive option for creators.

Who benefits and who is impacted?

This new reality benefits some groups more than others. Tech companies profit from AI tools, influencers use them to increase output, and governments can use them for messaging and control. At the same time, artists lose ownership of their work, viewers lose trust in what they see, and society loses a shared sense of truth. The spread of “deepfakes” makes it harder for citizens to distinguish between real news and computer-generated lies (Simonite, 2019). It becomes easier to spread false information and more difficult to hold people accountable when we can no longer trust faces, voices, or videos. As a society, we have to think about what it means to be human in the digital age if we are unable to tell the difference between AI and actual people online. Considering how well machines can imitate people, how should we assess trust and creativity?

This shows that the rise of AI is not just a technological problem; it also changes how we see people and truth online. When machines can copy humans so well, it becomes harder to know what is real. We need to think carefully about trust, creativity, and what it means to be human.

AI Attestation: I attest that I did not use AI for this discussion assignment.

Sources

Simonite, T. (2019, October 6). Prepare for the Deepfake Era of Web Video. Wired. https://www.wired.com/story/prepare-deepfake-era-web-video/

Humanity and AI: The Blurring Line

- Posted in BP01 by

Intelligence seemed to be exclusively human for most of human history. While machines could compute, store data, and obey commands, thinking and creativity were thought to be exclusively human qualities. That barrier doesn't feel steady anymore. These days, artificial intelligence can write, create graphics, help with diagnosis, and have human-like conversations. The distinction between human and machine intellect, which we formerly took for granted, is what has changed, not simply technology.

The Uneven Arrival

Cyberpunk has long highlighted instances in which cutting-edge technology interferes with daily life. Rather than envisioning a far-off future, the genre depicts how the future arrives unevenly, smashing into the present and altering societal institutions. The development of AI reflects this same cyberpunk pattern: it blurs the lines between creation and automation, person and machine, and mind and system. An example of this is shown in the film "Blade Runner: The Final Cut."

Artificial Intelligence in Everyday Life

This change may be seen in many aspects of daily life. AI technologies help with research, editing, and brainstorming in the classroom. Algorithms are used in the workplace to track productivity and filter job applications. AI systems are employed in healthcare to support diagnostic decisions and preserve patient data. Creative industries are also changing as AI-generated music, literature, and visuals become increasingly competitive with human-made work. Instead of being limited to humans, intelligence is now deeply embedded in global technology networks.

Power and Inequality

Cyberpunk is often described as “high-tech, low-life,” a condition embodied by the novel “Neuromancer”: state-of-the-art technology coexisting with inequality and insecurity. AI fits this pattern well. While companies and organizations benefit from speed and efficiency, many workers risk losing their jobs, being surveilled, or having their skills devalued. Because these systems are often owned and controlled by a few large companies, there are questions about who benefits most from this technological shift and who bears the risks.

Posthumanism

Furthermore, the posthumanism ideas discussed in class are related to this boundary collapse. Posthumanism argues that human-machine interactions change identity and cognition, challenging the notion that humans and technology are separate. When AI assists with writing, reasoning, and decision-making, intelligence becomes shared rather than exclusive to humans. Cyberpunk often depicts the body as an interface, but AI now functions as a cognitive interface, altering our mental processes without physically merging with us.

Risks and Bias

There are big risks that come with these changes. AI systems can spread false information, reproduce bias, and create what some people call "bullshit at scale," meaning outputs that sound confident but don't hold up. These problems get worse with globalization, because AI models are trained on huge amounts of data from people all over the world, sometimes without their knowledge or consent. The fact that decisions made by a tiny number of firms can affect workers, schools, and cultures worldwide echoes cyberpunk's worry about unbridled corporate power and lax accountability.

Cyberpunk Warning

In cyberpunk, straightforward solutions are uncommon, and this uncertainty is reflected in the rise of AI. While the technology is undeniably remarkable, it also challenges long-held notions of responsibility, intelligence, and creativity. When Roy Batty says, "I've seen things you people wouldn't believe," he is speaking from a world where boundaries have already collapsed. That line now seems less like fiction and more like a warning. The question is no longer whether technology will alter what it means to think, but whether humans will still oversee how AI is used.

Citations:

Gibson, W. (1984). Neuromancer. Ace Books.

Scott, R. (Director). (1982). Blade Runner: The Final Cut [Film]. Warner Home Video. Swank Digital Campus. https://digitalcampus.swankmp.net/xula393246/watch/C9BD78E96D3A71E0

ChatGPT was used to generate the image used in this blog.

The Death of the "Real"

- Posted in BP01 by

In the neon-soaked sprawl of William Gibson’s Neuromancer, the line between the organic and the digital is a porous membrane, constantly punctured by neural jacks and construct personalities. We often treat cyberpunk as a warning of a distant, dystopian future. We are wrong. It is a diagnosis of our present. The most significant boundary collapse of the last five years is not a geopolitical border dissolving, but the erosion of the human/synthetic divide—specifically, the collapse of human exclusivity in creativity and truth.

The Shift: The Synthetic Takeover (2022–2025)

For centuries, "creation" was the final fortress of humanity. Machines could weave cloth or assemble cars, but they could not dream. That boundary evaporated in August 2022, when Jason M. Allen’s Théâtre D’opéra Spatial took first place at the Colorado State Fair. It wasn't painted by a brush; it was hallucinated by Midjourney. The controversy that followed—artists crying foul, the U.S. Copyright Office refusing protection because "human authorship" was absent—marked the moment the definition of "artist" fractured.

Since then, the breach has widened into a chasm:

The "Dead Internet" is Here: As of 2024, reports indicate that automated bots and AI agents now generate a massive plurality of web traffic. We are increasingly screaming into a void populated by echoes of ourselves.

The Liar’s Dividend: The explosion of deepfakes (rising 244% in 2024 according to Entrust) has created an epistemological crisis. We have moved beyond "fake news" to "fake reality." When a CEO’s voice can be cloned to authorize million-dollar transfers, or a political candidate’s face grafted onto incriminating footage, the boundary between evidence and fabrication is gone.

The Drivers: Why Now?

This collapse wasn't an accident; it was engineered by the convergence of technological capability and surveillance capitalism.

Technology: The Transformer architecture (the "T" in GPT) allowed machines to stop acting like calculators and start acting like pattern-completers, digesting the sum total of human expression to mimic our "soul."

Economics: This is classic "High Tech, Low Life." The driving force is the ruthless efficiency of the market. Why pay a graphic designer, a copywriter, or an influencer when an algorithm can generate a "good enough" facsimile for fractions of a penny? The rise of AI influencers—virtual avatars generating billions in revenue—proves that capital prefers compliant, scalable code over messy, unionizing humans.

Connecting to Course Themes

This shift directly mirrors our recent discussions on Post-humanism and Baudrillard’s Simulacra. We are entering a phase where the simulation (the AI image, the deepfake) is more potent and valuable than the reality it mimics. The map has not just covered the territory; it has replaced it.

Furthermore, we see the cyberpunk theme of commodification of memory. These models are trained on our scraped data—our blogs, our art, our photos. We have been harvested to build the very machines that render us obsolete. It is the ultimate alienation: our collective culture is sold back to us by a subscription API.

Implications: The Human Question

The collapse of this boundary raises terrifying questions. If creativity is just probability management, what is left for us?

Who Benefits? The tech oligarchs holding the keys to the compute power.

Who is Impacted? The "cognitive proletariat"—writers, artists, and knowledge workers whose identity is tied to their output.

We are standing on the precipice of a world where "human-made" becomes a luxury label, a niche artisan category in a sea of synthetic content. The cyberpunk future isn't about cybernetic arms; it's about waking up and realizing the person you're talking to online—and perhaps the art you love—never existed at all.

Sources

Hasson, E. (2024, April 16). Five key takeaways from the 2024 Imperva Bad Bot Report. Imperva. https://www.imperva.com/blog/five-key-takeaways-from-the-2024-imperva-bad-bot-report

Kadet, K. (2024, November 19). Deepfake attempts occur every five minutes amid 244% surge in digital document forgeries. Entrust. https://www.entrust.com/company/newsroom/deepfake-attacks-strike-every-five-minutes-amid-244-surge-in-digital-document-forgeries

University of Richmond Law School. (2025, January 16). The synopsis - AI, art, & the law with Space Opera Theater [Video]. YouTube. https://www.youtube.com/watch?v=kYnXADaTF9A
