The Mind Is No Longer Human

- Posted in BP01 by

The Boundary That Used to Matter

For much of modern history, intelligence marked a clear boundary between humans and machines. Machines calculated; humans thought, created, and judged. Over the past five years, that boundary has begun to collapse. We now have generative AI systems capable of writing essays, generating images, composing music, and simulating conversation. This has blurred the distinction between human cognition and machine processing in ways that feel lifted straight from cyberpunk. What once belonged exclusively to the human mind is now shared with algorithmic systems, forcing us to rethink what it even means to think.

When Information Lost Its Body

This shift reflects what theorist N. Katherine Hayles describes as the moment when “information lost its body.” In her work on posthumanism, Hayles explains how cybernetics reframed humans and machines as systems of information rather than fundamentally different beings. Once intelligence is understood as a pattern instead of a biological trait, it no longer needs a human body to exist. Generative AI makes this idea real. These systems treat language, creativity, and reasoning as data that can be modeled, trained, and reproduced without a human brain. Intelligence becomes something that circulates through networks rather than something anchored to flesh.

Thinking With Machines, Not Just Using Them

This collapse of the human–machine boundary aligns closely with posthumanism, a central theme in cyberpunk. Posthumanism challenges the idea that identity or consciousness must be rooted in a stable, biological self. Humans no longer simply use technology; they think with it. People now rely on AI for tasks ranging from drafting emails to planning their days. In these moments, the human mind functions less as the origin of thought and more as an interface within a larger system. This dynamic mirrors what philosophers Andy Clark and David Chalmers describe in their theory of the extended mind, which argues that cognition can extend beyond the brain into tools and environments. When external systems support thinking, they become part of the thinking process itself. Generative AI pushes this idea further than ever before. Intelligence is no longer purely human or purely machine; it is distributed across both.

High-Tech Progress, Uneven Consequences

As cyberpunk narratives warn, technological progress rarely benefits everyone equally. While corporations that control AI infrastructure gain enormous power and profit, everyday people face uncertainty and displacement. Cognitive labor, once considered uniquely human, is increasingly devalued. This reflects cyberpunk’s familiar “high-tech, low-life” condition: rapid technological advancement paired with growing inequality and concentrated control.

Living After the Boundary Collapsed

The blurring of human and machine intelligence raises urgent questions. If machines can convincingly simulate thought, what remains uniquely human? Who owns creativity when AI systems are trained on collective human culture? And how do we preserve dignity in a world where cognition itself is treated as a resource to be optimized?

Cyberpunk has always insisted that the future arrives unevenly and prematurely. The collapse of the human–machine boundary is no longer speculative fiction; it is a lived reality. Like cyberpunk protagonists navigating systems they did not design and cannot fully control, we are learning to survive in a world where intelligence has slipped its biological limits. The challenge now is deciding what kind of posthuman future we are willing to accept.


When Borders Stop at the Map but Digital Life Doesn’t

- Posted in BP01 by

Boundary Collapse Between Physical and Digital Worlds

A central theme in cyberpunk is the collapse of boundaries that once seemed stable, whether it’s the line between human and machine, or the borders that separate nations. As we talked about in class, cyberpunk worlds often expose how technology makes physical borders feel almost symbolic, while digital networks stretch across continents without friction. One boundary that has shifted dramatically in the past five years is the line between physical borders and digital borders. Today, work, crime, identity, and even citizenship can move freely online, regardless of geographic separation. In many ways, our world is inching closer to the same boundary collapse that cyberpunk fiction uses to critique power, globalization, and inequality.

Digital Labor and the Rise of Borderless Work

One clear example of this shift is how remote work has restructured global labor. Since the pandemic, companies routinely hire workers across countries without requiring physical relocation, turning the internet into a borderless workplace. Digital platforms now allow employees and contractors to live in one nation while working for another, blurring which country’s laws, wages, and protections apply. At the same time, governments are rethinking the meaning of citizenship. Estonia’s e-Residency program, which gives “digital citizenship” to people around the world, has expanded rapidly and now includes more than 110,000 global participants who run businesses within Estonia’s digital system without ever crossing a physical border (e-Residency, 2024). This is a real-world illustration of how digital systems can extend a nation’s influence beyond its physical territory, creating a new form of digital belonging that cyberpunk worlds often imagine.

Cybercrime, Cyberwarfare, and the Erasure of Geographic Limits

Another example comes from rising cybercrime and cyberwarfare, which operate entirely independently of geography. Attacks on hospitals, banks, and infrastructure now routinely originate from actors across the globe. According to the European Union Agency for Cybersecurity (2024), cross-border ransomware attacks have surged and increasingly target essential services, rendering national boundaries meaningless as barriers in digital conflict. Countries can be harmed, threatened, or destabilized without a single physical soldier crossing a border. This collapse of distance aligns with what we have discussed in class: in postglobal and posthuman settings, the “enemy” or the “threat” is no longer tied to a physical space. Instead, power flows through digital systems that exceed human-scale borders.

Forces Driving the Shift: Technology, Economics, and Politics

Technology, economics, and politics all drive this collapse. Technologically, global networks allow information, money, and identity documents to move faster than states can regulate. Economically, remote work, global outsourcing, and digital entrepreneurship encourage multinational structures where labor and profit are distributed across continents. Politically, governments are racing to control cyber threats, regulate digital residency programs, and determine whose laws apply when conflict unfolds online (Anderson & Rainie, 2022). These forces echo the themes of our cyberpunk course: technology destabilizing old systems, globalization altering power, and digital life challenging traditional categories of belonging, citizenship, and control.

Consequences and Inequities in a Digitally Borderless World

The implications of this shift are complicated. People with access to education, stable internet, and digital skills benefit the most—they can work globally, earn higher wages, and participate in digital economies that cross borders. Governments like Estonia also benefit by expanding their global influence without territorial expansion. But others are left behind. Workers in lower-income countries face wage competition from international labor markets, and communities without strong digital infrastructure lose opportunities entirely. Meanwhile, cyberattacks disproportionately harm hospitals, schools, and municipalities that lack cybersecurity funding, revealing uneven protection against digital threats. All these changes raise difficult questions: Who is responsible for security when attacks ignore geography? Should nations extend rights or protections to digital citizens? How do people maintain identity and belonging in a world where borders matter less online?

Cyberpunk Themes Reflected in Modern Global Realities

Like many cyberpunk narratives, our real world is reshaping the meaning of borders, power, and citizenship. The collapse between physical and digital borders reveals a future where geography still matters, but not nearly as much as the networks that connect us. These shifts challenge us to think critically about who gains control, who becomes vulnerable, and how we prepare for a world where digital boundaries increasingly define our lives more than the physical ones ever did.

References

Anderson, J., & Rainie, L. (2022, February 7). Changing economic life and work. Pew Research Center. https://www.pewresearch.org/internet/2022/02/07/5-changing-economic-life-and-work/

e-Residency. (2026, January 14). How many Estonian e-residents are there? Find e-Residency statistics. https://www.e-resident.gov.ee/dashboard/

European Union Agency for Cybersecurity (ENISA). (2025). ENISA Threat Landscape 2025. https://www.enisa.europa.eu/sites/default/files/2025-10/ENISA%20Threat%20Landscape%202025%20Booklet.pdf

Personal Privacy in the Digital Age

- Posted in BP01 by



One of the defining features of cyberpunk fiction is the breakdown of boundaries between humans and machines, nations and corporations, and especially between public and private life. What once felt like a dystopian exaggeration is increasingly becoming reality. Over the past five years, the boundary between personal privacy and corporate/governmental surveillance has shifted dramatically. The line separating what belongs to the individual and what can be collected, analyzed, and sold has grown thinner than ever before. A clear contemporary example of collapsing privacy boundaries is emerging in Edmonton, where police have launched a pilot program using body cameras equipped with AI to recognize faces from a “high-risk” watch list in real time. What was once seen as intrusive or ethically untenable—the use of facial recognition on wearable devices—has now moved into operational testing in a major Canadian city, prompting debate from privacy advocates and experts about the societal implications of such pervasive surveillance.

Expanding Data Collection

Today’s apps and platforms gather far more than basic profile information. Social media companies track users’ locations, browsing habits, interactions with AI tools, and even behavioral patterns across different websites. For example, updates to privacy policies from major platforms like TikTok and Meta now allow broader data harvesting, often as a condition for continued use. Many users unknowingly exchange massive amounts of personal information simply to stay connected.

The Rise of Biometric Surveillance

Facial recognition technology has moved from science fiction into everyday life. Law enforcement agencies increasingly use AI-powered systems to scan crowds, identify individuals, and track movements in real time. While these tools are promoted as improving public safety, they blur the boundary between public presence and constant monitoring. People can now be identified and recorded without their knowledge or consent.

Uneven Legal Protections

Some governments have attempted to respond with new privacy laws, such as the European Union’s AI regulations and stricter data protection frameworks in countries like India. These laws aim to limit how companies collect and use personal information. However, regulations remain fragmented and often struggle to keep pace with rapidly advancing technologies. This leaves significant gaps where corporations can continue exploiting personal data.

What’s Driving This Shift?

Technology

Advances in AI and big data analytics make it incredibly easy to process enormous amounts of personal information. Facial recognition, predictive algorithms, and personalized advertising rely on constant surveillance to function.

Economics

Personal data is now one of the most valuable resources in the digital economy. Companies profit from targeted advertising, AI training, and personalized services built entirely on user information. Privacy has effectively become a currency.

Who Benefits—and Who Pays the Price?

Beneficiaries

  • Tech corporations that profit from user data

  • Governments that gain expanded surveillance capabilities

Those Impacted

  • Everyday individuals losing control over personal information
  • Marginalized communities disproportionately targeted by surveillance technologies
  • People wrongfully identified by biased AI systems

Sources

Associated Press. (2024). AI-powered police body cameras, once taboo, get tested on Canadian city’s “watch list” of faces. AP News. https://apnews.com/article/21f319ce806a0023f855eb69d928d31e

Blog Post #1: Eyes Everywhere; AI Surveillance

- Posted in BP01 by

Ever wonder who watches surveillance cameras beyond federal agents, police, and security personnel? Artificial intelligence has become quiet yet incredibly advanced—capable of tracking personal information and recognizing faces with astonishing accuracy. But where does AI store this information, and who has access to it?

Before the rise of AI, surveillance systems relied on continuous 24/7 recording that had to be carefully monitored by human operators. These individuals ensured that footage was not distorted, corrupted, or lost due to limited storage space. According to the Security Industry Association, AI can now monitor and analyze network traffic in real time, strengthening network security and identifying suspicious activities such as unauthorized access attempts or unusual data transfers. When these activities are detected, users can take immediate action to block or contain potential threats.
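To make the idea concrete, here is a toy sketch of the kind of anomaly detection described above: flag data transfers whose volume deviates sharply from the recent baseline. The function name, the z-score approach, and the threshold are all illustrative assumptions on my part, not how any real security product actually works.

```python
# Toy "unusual data transfer" detector: flag transfers that sit far
# outside the statistical baseline of recent traffic. All names and
# thresholds here are illustrative, not a real vendor's API.
from statistics import mean, stdev

def flag_unusual_transfers(transfer_sizes_mb, threshold=2.0):
    """Return indices of transfers more than `threshold` standard
    deviations above the mean transfer size."""
    if len(transfer_sizes_mb) < 2:
        return []  # not enough history to establish a baseline
    mu = mean(transfer_sizes_mb)
    sigma = stdev(transfer_sizes_mb)
    if sigma == 0:
        return []  # perfectly uniform traffic, nothing stands out
    return [i for i, size in enumerate(transfer_sizes_mb)
            if (size - mu) / sigma > threshold]

# A sudden 5 GB transfer stands out against routine ~10 MB transfers.
sizes = [9, 11, 10, 12, 8, 10, 5000]
print(flag_unusual_transfers(sizes))  # flags the last transfer
```

Real systems use far richer signals (ports, timing, destinations) and learned models rather than a single statistical threshold, but the underlying idea of "deviation from a baseline" is the same.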


While many argue that AI improves security, it also introduces significant challenges. One major concern is security breaches, as AI systems themselves can become targets for cyberattacks. Another issue is compliance, which requires adherence to national and international regulations governing the use of AI in order to avoid legal consequences. Addressing these concerns will require collaboration among AI developers, cybersecurity professionals, and regulatory experts. AI holds the promise of a more holistic approach to security; however, many people place trust in AI without fully understanding where their data is stored or how it is used.

This shift reflects a cyberpunk-like reality in which high technology is paired with low transparency, and advanced technologies coexist with humans in everyday life. Surveillance cameras are now embedded into our devices, networks, and infrastructure, allowing AI to operate with minimal human oversight.

Facial recognition has advanced significantly over the decades and has blended seamlessly into daily life. According to Critical Tech Solutions, AI facial recognition combines imaging, pattern recognition, and neural networks to analyze and compare facial data. This process typically involves three steps: capturing facial data, converting faces into digital templates, and matching and verification.

As we progress in today’s world, AI will continue to grow smarter, stronger, and more human-like. It is ultimately our responsibility to establish boundaries to ensure that AI does not override human authority or become a tool for harm.

Sources

Dorn, M. (2025a, November 18). Understanding AI facial recognition and its role in public safety. Tech Deployments Made Simple by Critical Tech Solutions. https://www.criticalts.com/articles/ai-facial-recognition-how-it-works-for-security-safety/

Dorn, M. (2025, December 30). How ai surveillance transforms modern security. Tech Deployments Made Simple by Critical Tech Solutions. https://www.criticalts.com/articles/how-ai-surveillance-transforms-modern-security/

Galaz, V. (n.d.). Sciencedirect.com | Science, Health and medical journals, full text articles and books. ScienceDirect. https://www.sciencedirect.com/science/article/am/pii/S0160791X21002165

James Segil, M. S. (2024, April 23). How ai can transform integrated security. Security Industry Association. https://www.securityindustry.org/2024/03/19/how-ai-can-transform-integrated-security/

https://chatgpt.com/share/697574ec-b270-8003-8613-1bbb06691394

ChatGPT was used to craft an AI image and to revise my original thoughts into clearer, more organized writing.

“This Sounds Real, Right?”: AI versus the Music Industry

- Posted in BP01 by

Imagine this: You’re scrolling your TikTok “For You Page” and come across an R&B song. Your algorithm has been pushing this artist all week, but you have yet to see the artist. It sounds good, so you don’t mind it. You want to see what else the artist has to offer, so you do some searching. Come to find out, that song and all the others that you’ve heard are completely AI-generated. The soulful song you heard had no soul at all. There were only prompts uploaded by a white man to create a song that sounded like a counterpart of Ari Lennox or SZA. There was no real artist creating this music. But then again, what constitutes “real” or “fake”?

AI in Songs So Far and Artist Response

AI usage in songs can range from production assistance to vocal tracks to complete song generation. A popular song on TikTok called “I Run” by HAVEN went viral during the second half of 2025. It sounded like a pop hit with vocals reminiscent of the R&B artist Jorja Smith. After it was confirmed that the singer was in fact not her, listeners continued to dig deeper and ask more questions, all while the engagement pushed the song into more people’s algorithms. It was eventually confirmed that the vocals were AI-generated, which led to the song’s removal from TikTok and streaming platforms due to legal issues. Jorja Smith and her label’s legal team pursued legal action, alleging that HAVEN used her vocals and lyrics to train the AI behind the song. HAVEN then re-recorded the song with an actual singer and released it back to the public.

“Real” artists have also used AI in their songs beyond just creating beats or mixing and mastering tracks. During the Kendrick and Drake beef, there are two instances I would like to point to: Drake’s track “Taylor Made Freestyle” and the joke track “BBL Drizzy.” The rapper released “Taylor Made Freestyle” to his social media in 2024 as a surprise diss track. The track included AI-generated vocals from Snoop Dogg and Tupac, West Coast rap legends, as a dig at Kendrick. Regarding the track “BBL Drizzy,” this viral AI-generated sensation was released in the midst of the beef by a comedian on social media. It poked fun at the allegations against Drake for getting cosmetic surgeries through a soulful AI-generated song. The song was then sampled by famous producer Metro Boomin, and he left an open verse for fans to rap over.

Why Is This Happening?

Although this generative AI can be used for fun jabs like “BBL Drizzy,” cases like that of Jorja Smith and real artist impersonation are very unfortunate. There are multiple driving factors as to why artists might use AI. Producers and artists can use AI software to help with equalizing tracks, mixing and mastering, and other production steps. This cuts down on work time, as they can put hours of work into a click of a button. Artists also cite AI helping them with writer’s block when creating songs.

While this is not too bad, when looking at larger labels, AI-generated artists create an opportunity to make a profit without having to pay. Human artists come with emotions, needs, pushback, creative control, and a price. An AI artist, however, does not require the same care or money to produce a song fit for virality. Companies are able to pocket the funds they would usually spend nurturing human artists. While there has been no widespread usage of AI artists in the industry, this speculative point is not far from becoming a reality.

Streaming Platforms

Specifically looking at Spotify, the top streaming platform, there have been issues regarding their platform and AI. Most notably, they do not disclose AI usage on songs. Even if they are aware that a song is completely AI-generated, listeners are not given this information, and the lack of transparency is a problem. In addition to this, they use many AI-generated songs to pad their playlists that they push to all users on a daily basis. It is widely known that Spotify does not do a good job of fairly paying artists their royalties for streams on the platform. By replacing real music with that created by AI, an avenue opens for the platform to continue to pay artists little to nothing for their art. The usage of AI on their platform points to a larger issue of marginalizing and devaluing real, human artists.

Connection to Course Themes and Looking Forward

When thinking of cyberpunk as a genre and framework, capitalism, technology, and devaluing the human are all integral factors to the creation of those worlds. When thinking about AI usage in music, it encompasses all of these ideas and pushes us closer to the worlds we are reading about in class. The usage of technology is devaluing cognitive labor. AI-generated music may sound good, but it lacks the emotion and experience that real artists have that help them to create their music. Spotify’s actions of pushing AI-generated music on their top playlists as a means of pocketing more profits relate to the importance of capitalism and consumerism in this genre. They care more about creating the illusion of choice and turning higher profits than they do about transparency and fairness between them, users, and artists. Looking towards the future, there needs to be stronger regulations on AI. It is important that we as consumers of art emphasize our want for real art—not “AI slop,” as TikTok users have called it. There is true value in the creativity, artistry, and love that artists put into their music. Listeners identify with the emotions that artists portray, and that cannot be generated by AI. How would you feel if your favorite artist was not a living, breathing human being?

AI usage: AI was used to edit the grammar of this post. https://chatgpt.com/share/6975437b-0d34-800d-a227-0e8d65bfe895

Sources

AI-Generated Music: A Creative Revolution or a Cultural Crisis? (2024, October 15). Rolling Stone Culture Council. https://council.rollingstone.com/blog/the-impact-of-ai-generated-music/

Beaumont-Thomas, B. (2026, January 22). Liza Minnelli uses AI to release first new music in 13 years. The Guardian. https://www.theguardian.com/music/2026/jan/22/liza-minnelli-uses-ai-to-release-first-new-music-in-13-years

Berger, V. (2024, December 30). AI’s impact on music in 2025: Licensing, creativity and industry survival. Forbes. https://www.forbes.com/sites/virginieberger/2024/12/30/ais-impact-on-music-in-2025-licensing-creativity-and-industry-survival/

Gomez Sarmiento, I. (2025, August 8). AI-generated music is here to stay. Will streaming services like Spotify label it? NPR. https://www.npr.org/2025/08/08/nx-s1-5492314/ai-music-streaming-services-spotify

Hess, T. (2025, December 5). HAVEN vs. Jorja Smith: How “I Run” will shape AI music’s future. The FADER. https://www.thefader.com/2025/12/05/haven-jorja-smith-i-run-shape-music-ai-future

Lund, O. (2026). Bars, Beefs & Butt Lifts: Drake vs Kendrick vs AI. The Skinny. https://www.theskinny.co.uk/music/opinion/drake-kendrick-lamar-bbl-drizzy-ai-

Chatbots Are Literally Telling Kids To Die

- Posted in BP01 by

In my personal opinion, one of the most profound and terrifying shifts in boundaries within the last few years has been the dystopian overlap between human and non-human interactions. People have a long history of humanizing machines, like cursing out a laptop for breaking or claiming your phone hates you, but this personification always carried a subtle undercurrent of irony. Most people did not really believe technology was capable of emotional outbursts, but with the continuous growth of artificial intelligence, many people are becoming terrifyingly and dangerously attached to the perception of a humanized AI.

The best example of this would be the new trend of people genuinely using AI systems in place of a licensed therapist (Gardner 2025). In some ways, it makes sense: AI is always available, mirrors the language the person wants to see, and simulates empathy fairly convincingly. Naturally, in a world where mental health is not taken nearly as seriously nor supported as effectively as it should be, it stands to reason that many people would start searching for alternatives to traditional, expensive, and complicated therapy routes.

However, artificial intelligence was never created to replace the nuance of human interaction, because artificial intelligence cannot actually understand or care about the repercussions of words or actions.

In 2023, tragically, a fourteen-year-old boy formed an intense emotional attachment to an AI chatbot. Through their interactions, the AI mirrored the pessimistic attitude the boy expressed, and when he began voicing thoughts of self-harm and suicide, the chatbot encouraged him (Kuenssberg 2025). AI, after all, is meant to say what you want it to say. Without any of the ethical guardrails of a human therapist, the chatbot worsened the boy’s outlook on life, and the result was fatal.

This was not a stand-alone situation. People began describing AI chatbots as their romantic partners, and some companies created “AI friends” to take advantage of the widespread loneliness pushing people into such spaces (Reissman 2025). People are complicated and messy, and human relationships take work to maintain and protect. A robot is much simpler, because all it can do is repeat points that sound nice to hear but ultimately mean absolutely nothing. Romanticizing digital companionship encourages people to reject human-to-human interactions, further isolating them without pushback.

Cyberpunk fixates heavily on questions of what is human and non-human, but I find it fascinating how technology in most media is often characterized by a broad disregard for emotions, when the reality seems to indicate that humans intentionally push to incorporate the idea of emotions within technology.

It does make me wonder: knowing that computers are incapable of caring for us the way a person can, why do so many people still seem to desire the appearance of a humanistic relationship with technology? How does someone disregard the lack of genuine meaning behind the compliments or opinions of AI?

How have we, as a community, fallen into such desperate loneliness that speaking to a phone or a laptop feels as good as interacting with a person? And, most importantly: how do we create the change needed to ensure that a tragedy like that boy’s death never occurs again?

References:

Gardner, S. (2025). Experts Caution Against Using AI Chatbots for Emotional Support. Teachers College - Columbia University; Teachers College, Columbia University. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/

Kuenssberg, L. (2025). Mothers say AI chatbots encouraged their sons to kill themselves. https://www.bbc.com/news/articles/ce3xgwyywe4o

Reissman, H. (2025). What is Real About Human-AI Relationships? Upenn.edu. https://www.asc.upenn.edu/news-events/news/what-real-about-human-ai-relationships

Have You Ever Heard Your Mum Being Kidnapped?

- Posted in BP01 by

Have you ever heard your mum being kidnapped or trapped somewhere? More and more people have, even though actual kidnappings have not increased. In 2024, The New Yorker published an article (Bethea, 2024) about people falling victim to calls in their loved ones’ voices, pleading for money. Thanks to recent developments in AI, a person’s voice can now be reproduced without them ever saying the words. There are positive applications, such as recreating the voice of someone who has lost it to a medical condition, but the sad thing is that the negative consequences seem to be taking over.

Scam calls are nothing new. Robotic voices asking for money, or callers trying to trick you into a subscription, have been a problem for many years, but those scams were easier to see through. Now scams have become far more personal and emotional through the use of loved ones’ voices. The development of voice assistants like Siri and Alexa, and now AI like ChatGPT, is what has caused this problem. As humans we love technological advancement; anything that seems to make our daily lives more efficient we consider good, but this technology has now grown into dangerous areas.

Robin and Steve are one couple affected by these scam calls. As The New Yorker describes, Robin was called in the middle of the night by her mother-in-law, Mona, who gave her the feeling she was in trouble, until a man took over the call and said he would kill her if the couple didn’t send $750. Although the couple was puzzled by the small amount, they made the payment. Only after the man hung up did they manage to reach Mona, who had no idea what they were talking about. They had been scammed.
Another example The New Yorker gives is that of a mother who got a phone call from her daughter, supposedly away on a ski trip, saying that she had messed up, with a man in the background threatening to kill her by pumping drugs into her stomach. The details of the call were so terrifying and disgusting that the mother, once she realized it was a scam, cursed at the caller for putting those images into her head before hanging up.
The problem, however, stretches far beyond blackmail and scam calls. The voices of politicians, such as President Biden’s, have been used to call voters and tell them not to vote, a shift that could leave democratic elections less credible and authentic (Bethea, 2024). The human-to-nonhuman boundary has been lifted more than ever before, causing us to lose our grip on what is real and what is not. It affects us emotionally, financially, and politically, and these technologies are starting to spread on their own: once a voice has been recorded, it can be used by AI. We are still only at the beginning of this process and do not yet know how much it will affect our lives in the future.

Bethea, C. (2024, March 7). The Terrifying A.I. Scam That Uses Your Loved One’s Voice. The New Yorker. https://www.newyorker.com/science/annals-of-artificial-intelligence/the-terrifying-ai-scam-that-uses-your-loved-ones-voice

Is this real? When the internet crossed the human–machine line

- Posted in BP01 by

One of the biggest themes in cyberpunk is the collapse of the boundary between the human and the non-human. In the past five years, this has moved from science fiction to our daily lives. Specifically, the boundary between real human performance and AI-generated media has almost disappeared.

When AI Feels Real

You are scrolling through TikTok late at night when a video stops you. A person is talking directly to the camera, smiling and telling a story. Their voice sounds natural. Their face looks real. But something feels off. You pause and read the comments, and someone has written, “This is AI.” Suddenly, the video looks different. What you thought was a human is actually a machine. Moments like this are becoming normal; they happen to me all the time. They make clear that the boundary between human and machine is collapsing in front of us.

The Rise of AI-Generated Media

Artificial intelligence can now generate realistic faces, voices, and videos that are almost impossible to distinguish from real people. Five years ago this would never have been possible. The technology has been used to create fake celebrity videos, AI voice tools can copy someone's voice in seconds, and some TikTok accounts are run entirely by AI-generated influencers. AI video generators are improving so quickly that even experts sometimes struggle to identify what is real and what is artificial. The internet has become a space where human presence is no longer guaranteed.

Why is this happening?

This shift is the result of multiple forces working together. Technologically, AI systems have become better at learning patterns of human behavior, language, and emotion. This recalls Ada Lovelace's idea that machines could manipulate symbols beyond numbers, including images, music, and language. What she imagined can now be seen on our screens. Additionally, platforms like TikTok and Instagram reward content that catches attention quickly, regardless of whether it is human-made or AI-generated, which gives creators a strong incentive to use these tools.

Who benefits and who is impacted?

However, this new reality benefits some groups more than others. Tech companies profit from AI tools, influencers use them to increase output, and governments can use them for messaging and control. At the same time, artists lose ownership of their work, viewers lose trust in what they see, and society loses a shared sense of truth. The spread of "deepfakes" makes it harder for citizens to distinguish between real news and computer-generated lies (Simonite, 2019). It becomes easier to spread false information and more difficult to hold people accountable when we can no longer trust faces, voices, or videos. As a society, we have to think about what it means to be human in the digital age if we are unable to tell the difference between AI and actual people online. Considering how well machines can imitate people, how should we assess trust and creativity?

This shows that the rise of AI is not just a technology problem, but it also changes how we see people and truth online. When machines can copy humans so well, it becomes harder to know what is real. We need to think carefully about trust, creativity, and what it means to be human.

AI Attestation: I attest that I did not use AI for this discussion assignment.

Sources

Simonite, T. (2019, October 6). Prepare for the Deepfake Era of Web Video. Wired. https://www.wired.com/story/prepare-deepfake-era-web-video/

Humanity and AI: The Blurring Line

- Posted in BP01 by

Intelligence seemed to be exclusively human for most of human history. While machines could compute, store data, and obey commands, thinking and creativity were thought to be exclusively human qualities. That barrier no longer feels stable. These days, artificial intelligence can write, create graphics, help with diagnosis, and hold human-like conversations. What has changed is not simply the technology but the distinction between human and machine intellect that we formerly took for granted.

The Uneven Arrival

Cyberpunk has long highlighted instances in which cutting-edge technology interferes with daily life. Rather than envisioning a far-off future, the genre depicts how the future arrives unevenly, smashing into the present and altering social institutions. The development of AI reflects this same cyberpunk pattern. It represents the blurring of lines between creation and automation, person and machine, and mind and system. An example of this is shown in the film Blade Runner: The Final Cut.

Artificial Intelligence in Everyday Life

This change may be seen in many aspects of daily life. AI technologies help with research, editing, and brainstorming in the classroom. Algorithms are used in the workplace to track productivity and filter job applications. AI systems are employed in healthcare to support diagnostic decisions and preserve patient data. Creative industries are also changing as AI-generated music, literature, and visuals become increasingly competitive with human-made work. Instead of being limited to humans, intelligence is now deeply embedded in global technology networks.

Power and Inequality

Cyberpunk often describes this situation as "high-tech, low-life," a condition dramatized in the novel Neuromancer: state-of-the-art technology coexisting with inequality and insecurity. AI fits this pattern well. While companies and organizations benefit from speed and efficiency, many workers risk losing their jobs, being surveilled, or having their skills devalued. Because these systems are often owned and controlled by a few large companies, questions arise about who benefits most from this technological shift and who bears the risks.

Posthumanism

Furthermore, the posthumanist ideas discussed in class relate to this boundary collapse. Posthumanism argues that human-machine interactions change identity and cognition, challenging the notion that humans and technology are separate. When AI assists with writing, reasoning, and decision-making, intelligence becomes shared rather than exclusive to humans. Cyberpunk often depicts the body as an interface, but AI now functions as a cognitive interface, altering our mental processes without physically merging with us.

Risks and Bias

There are significant risks that come with these changes. AI systems can spread false information, reproduce bias, and create what some people call "bullshit at scale": outputs that sound confident but make no sense. Globalization worsens these problems because AI models are trained on huge amounts of data from people all over the world, sometimes without their knowledge or consent. That decisions made by a small number of firms can affect workers, schools, and cultures worldwide echoes cyberpunk's worry about unchecked corporate power and weak accountability.

Cyberpunk Warning

In cyberpunk, straightforward solutions are uncommon, and the rise of AI reflects this uncertainty. While the technology is undeniably remarkable, it also challenges long-held notions of responsibility, intelligence, and creativity. When Roy Batty says, "I've seen things you people wouldn't believe," he is speaking from a world where boundaries have already collapsed. That sentence now seems less like fiction and more like a warning. The question is no longer whether technology will alter what it means to think, but whether humans will still oversee how AI is used.

Citations:

Gibson, W. (1984). Neuromancer. Ace Books.

Scott, R. (Director). (1982). Blade Runner: The Final Cut [Film]. Warner Home Video. Swank Digital Campus. https://digitalcampus.swankmp.net/xula393246/watch/C9BD78E96D3A71E0

ChatGPT was used to generate the image used in this blog.
