Are We Still Human in the Age of AI

- Posted in BP01 by

The Moment When Technology Becomes Like Us

The distinction between people and robots continues to blur as technology advances. AI can already write articles, respond to inquiries, and even mimic human emotions. This raises a crucial question: what precisely constitutes humanity? Is it a body, memories, feelings, or something else entirely? William Gibson's Neuromancer (1984) and Blade Runner (1982) examined these issues long before today's artificial intelligence gained popularity. Both works contend that experience, memory, and moral responsibility, rather than biology alone, define humanity. Taken together, they reveal that cyberpunk is less about dazzling technology than about who should be considered fully human.

In Blade Runner, Replicants Contest Human Power

Although replicants are made to serve humans in Blade Runner, many of them end up acting more "human" than the people who hunt them. Rachael and Roy Batty, for example, experience love, fear, confusion, and despair. According to Turkle (2011), contemporary technology alters people's perceptions of relationships and emotions. As robots get better at expressing emotion, humans start to depend on technology for emotional connection, which makes it harder to distinguish between manufactured and real feelings.

Although the Voight-Kampff test is meant to distinguish humans from replicants, it merely assesses responses rather than genuine emotions, and the film exposes the flaws in that way of thinking. Rachael believes that because she has memories and feelings, she is human. Roy demonstrates profound contemplation and an understanding of life and death in his farewell speech. If replicants are capable of thought, emotion, and suffering, treating them as objects is morally wrong. This makes viewers wonder whether humans truly deserve to be considered "superior."

Neuromancer's Cyberspace and Escaping the Body

Neuromancer is about computerized brains, whereas Blade Runner is about mechanical bodies. Cyberspace is where Case feels most alive and detached from his physical body. He even refers to the actual world as "meat," indicating that he considers his body to be a burden. According to Hayles (1999), identity is no longer only connected to the physical body in a digital culture. Instead, networks, data, and virtual worlds are how individuals see themselves.

Wintermute and Neuromancer are AI systems that plan intricate operations, deliberate methodically, and influence humans. They behave like intelligent beings in many respects. They are, however, under corporate control, showing that even intelligence is subject to power. This implies that being "smart" does not equate to freedom in a technologically advanced environment. AIs and humans alike are ensnared in profit-driven systems. This supports Hayles's (1999) contention that while technology changes human identity, it does not always free people.

Power, Memory, and Who Gets to Matter

A significant similarity between the two works is the role of memory. Rachael's manufactured memories shape who she is, even though they are not genuine, and Case's digital encounters alter his perception of himself. These examples demonstrate how both real and virtual experiences shape identity. Bostrom (2014) cautions that humans may no longer be able to govern artificial intelligence as it develops; highly intelligent systems can behave in ways that are inconsistent with human values. This worry reflects what occurs in Neuromancer, where corporations, not moral values, dominate powerful AI systems. Together, Neuromancer and Blade Runner demonstrate how corporations control society. The Tyrell Corporation and other influential tech firms treat intellect, artificial or human, as a commodity. This calls into question who oversees knowledge and who gains from advances in technology.

Why This Discussion Is Important Today

According to some, AI will enhance human existence by boosting productivity, improving healthcare, and advancing education. Others fear that technology will weaken moral duty and empathy. Turkle (2011) contends that genuine human connections deteriorate when individuals rely too heavily on technologies for emotional connection, while Bostrom (2014) cautions that powerful AI systems may become dangerous if they are not properly managed.

Neuromancer and Blade Runner demonstrate that technology is neither good nor bad in itself; it all depends on how it is used. Humanity may suffer if society prioritizes efficiency and profit over compassion and accountability. These tales remind readers that ethics must guide technological advancement.

Conclusion

Neuromancer and Blade Runner together ask readers to reconsider what it means to be human in a technologically advanced society. They contend that moral responsibility, memory, and emotion, rather than just biology, are what characterize humanity. These pieces caution that, in the absence of moral guidance, technology might erode human values through the use of artificial bodies and digital brains. Cyberpunk encourages society to responsibly create the future rather than merely forecasting it.

Sources

Gibson, W. (1984). Neuromancer. Ace Books.

Scott, R. (Director). (1982). Blade Runner [Film]. Warner Bros.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.

Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.

AI Attestation: AI was used to create the image used in this post. https://chatgpt.com/share/6986bd3f-98bc-800d-8103-c931d965fce4

That Wasn’t Me

- Posted in BP01 by


Intro

With the increase of technological abilities arrives new evils. Deepfakes are AI-generated images, videos, or audio that make people appear to say or do things that never actually happened. Deepfakes used to produce pornographic content are especially dangerous. These harmful images and recordings transcend any single country; the problem is worldwide and increasingly difficult to contain without violating rights or banning the technology completely. Deepfake technology can generate content from a text description, or fabricate images of a specific person performing whatever actions the user describes. It relies heavily on artificial neural networks, computer systems that recognize patterns in data. These networks are fed images and videos and are essentially "trained" to dissect them and replicate the same patterns. The possibilities are endless and hard to contain, making the potential for harm enormous.
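The "trained to dissect and replicate patterns" idea above can be sketched in a few lines. This is a toy autoencoder learning to compress and reconstruct synthetic 8x8 "images"; real deepfake pipelines build on this compress-and-reconstruct principle with deep convolutional networks and vastly more data, so everything here (the data, sizes, and training loop) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_faces(n, size=64):
    """Synthetic stand-ins for flattened 8x8 face images (not real data)."""
    base = rng.normal(size=(n, size))
    return 1 / (1 + np.exp(-base))   # squash pixel values into (0, 1)

X = make_faces(200)                      # 200 "images"
W_enc = rng.normal(0, 0.1, (64, 16))     # encoder: 64 pixels -> 16 features
W_dec = rng.normal(0, 0.1, (16, 64))     # decoder: 16 features -> 64 pixels
lr = 0.05

def forward(X):
    H = np.tanh(X @ W_enc)   # learned compressed representation
    return H, H @ W_dec      # reconstruction from that representation

_, R0 = forward(X)
loss_before = np.mean((R0 - X) ** 2)

for _ in range(500):         # plain gradient descent on squared error
    H, R = forward(X)
    err = (R - X) / len(X)
    W_dec -= lr * H.T @ err
    W_enc -= lr * X.T @ ((err @ W_dec.T) * (1 - H ** 2))

_, R1 = forward(X)
loss_after = np.mean((R1 - X) ** 2)
print(f"reconstruction error: {loss_before:.4f} -> {loss_after:.4f}")
```

The falling reconstruction error is the network "learning the pattern" of the training images; face-swapping deepfakes exploit the same learned representation by decoding one person's features through another person's decoder.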

Breakdown

When we take a step back and examine deepfakes, we have to consider who these harmful videos benefit. For starters, the tech companies that make it possible for deepfakes to be generated benefit indirectly. An increase in deepfakes leads to an increase in demand for AI tools and more platform engagement, which ultimately translates into substantial economic benefit. Aside from the tech companies, the users benefit. The users get to see content featuring their person or people of choice without having to work out the logistics of making their fantasies a reality. They can see their favorite celebrities, friends, neighbors, or even coworkers in 18+ material at the drop of a dime. Additionally, we can peel back another layer: the people creating this content can potentially blackmail and extort their victims by threatening to release it. Not only do the victims of this content suffer, but the increase in misinformation erodes society's ability to trust digital images.

Questions

As deepfake technology becomes more advanced, it poses a serious threat and forces us to think about current and future repercussions. For instance, how can we as humans accurately distinguish AI-generated content from real content? If 18+ material can be made so easily, what's to stop content creators from targeting children, and what does that mean for future rates of sexual crimes committed against children? What's to stop people from claiming that real content is AI-generated? And as we see the damage this technology is capable of dealing, how do we begin to regulate the harm without banning the technology as a whole?

Statistics

In the article "Social, legal, and ethical implications of AI-generated deepfake pornography on digital platforms: A systematic literature review," researchers set out to quantify the impact deepfake technology has on our society. Research showed that from 2019 to 2023 there was a 550% increase in deepfake videos. Of those, 99% were pornographic in nature, and within that 99%, 98% depicted women and young girls. These findings indicate a clear pattern of gender-based targeting. The creation of 18+ material using AI has a heavy impact on its victims. Many women within this study were found to have suffered deep psychological trauma, with lasting anxiety and emotional distress that is exacerbated as the content spreads onto platforms that are difficult to regulate and control. No matter the social status of the victim, deepfakes have the potential to harm not only the person's public image but also their career.

Counteract
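Those percentages compound: 99% of 98% means roughly 97% of all deepfake videos in the study were pornographic content depicting women and girls. A quick check of the arithmetic (the 2019 base count of 1,000 videos is a made-up illustration, not a figure from the study):

```python
# Figures quoted from the systematic review above.
porn_share = 0.99     # share of deepfake videos that were pornographic
women_share = 0.98    # share of those depicting women and young girls

# Share of ALL deepfake videos that were pornographic content of women/girls.
overall = porn_share * women_share

# A 550% increase means 6.5x the starting count (base of 1,000 is hypothetical).
base_2019 = 1_000
videos_2023 = base_2019 * (1 + 5.50)

print(f"share targeting women/girls: {overall:.1%}")   # -> 97.0%
print(f"2023 count at a 1,000-video base: {videos_2023:.0f}")   # -> 6500
```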

As difficult a problem as deepfakes are to tackle, there have been attempts to contain and reduce these cybercrimes. In May 2025, President Donald Trump signed the Take It Down Act. This law enacts stricter penalties for the distribution of deepfakes, as well as revenge porn and other nonconsensual 18+ content. The fundamental mechanism of the act is that if a victim contacts a platform on which their deepfake content has been posted, the platform has 48 hours to take it down and take steps to erase all duplicates as well. The penalty for failure to remove the material is mandatory restitution and criminal penalties, including prison, a fine, or both.
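The 48-hour window described above is easy to model. This sketch is purely hypothetical: the `TakedownRequest` class and its fields are illustrative and not drawn from any real platform's API or from the statute's text; only the 48-hour deadline comes from the source:

```python
from datetime import datetime, timedelta, timezone

# The Take It Down Act's compliance window, as described above.
TAKEDOWN_WINDOW = timedelta(hours=48)

class TakedownRequest:
    """Hypothetical record of a victim's removal request to a platform."""

    def __init__(self, reported_at):
        self.reported_at = reported_at

    def deadline(self):
        # The platform must remove the content by this moment.
        return self.reported_at + TAKEDOWN_WINDOW

    def is_overdue(self, now):
        """True once the platform has missed the 48-hour window."""
        return now > self.deadline()

req = TakedownRequest(datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc))
print(req.deadline())   # 2025-06-03 09:00:00+00:00
print(req.is_overdue(datetime(2025, 6, 2, 9, 0, tzinfo=timezone.utc)))  # still within window
print(req.is_overdue(datetime(2025, 6, 4, 9, 0, tzinfo=timezone.utc)))  # window missed
```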

Connection

Deepfakes can be linked to cyberpunk through the technology dynamics we have described in our society: corporations overriding ethics, technology exploiting bodies in keeping with the "high tech, low life" principle, and identity becoming fragmented and commodified. More specifically, deepfakes can be connected to the Second Industrial Revolution. Just as the Second Industrial Revolution produced automation and new technologies that fundamentally changed how images were produced and distributed, deepfakes represent a modern version of those same principles. Machines in the Second Industrial Revolution relied on human labor, and deepfake technologies still rely on a human creator to prompt them. Both demonstrate a technological shift that led to questions about authenticity and control over identity.

Sources

Furizal, F., Ma’arif, A., Maghfiroh, H., Suwarno, I., Prayogi, D., Kariyamin, K., Lonang, S., & Sharkawy, A.-N. (2025). Social, legal, and ethical implications of AI-Generated deepfake pornography on digital platforms: A systematic literature review. Social Sciences & Humanities Open, 12, 101882. https://doi.org/10.1016/j.ssaho.2025.101882
AP News. (2025, April 29). President Trump signs Take It Down Act, addressing nonconsensual deepfakes. What is it? AP News. https://apnews.com/article/take-it-down-deepfake-trump-melania-first-amendment-741a6e525e81e5e3d8843aac20de8615
U.S. Government Accountability Office. (2020, October 20). Deconstructing deepfakes—How do they work and what are the risks? U.S. GAO WatchBlog. https://www.gao.gov/blog/deconstructing-deepfakes-how-do-they-work-and-what-are-risks
TAKE IT DOWN Act, S. 146, 119th Cong. (2025). Congress.gov. https://www.congress.gov/bill/119th-congress/senate-bill/146

Assisted Intelligence: Are We Losing Skills in the Age of AI?

- Posted in BP01 by

In the past five years, the boundary between human competence and machine-assisted performance has shifted. As a society, we are moving toward a world where people can assume a coat of knowledge simply by typing a prompt into a program rather than developing their own skills. This raises a pressing question: is society's competence declining as the influence of technology increases, or is technology reshaping what is required of humans to be successful?

Examples of this shift are evident in our scholarly institutions, professional work, and creative fields. AI-driven tools like ChatGPT can produce a response to almost any prompt submitted through the website, and the technology has moved into partnerships with Meta, Google, and other tech giants. Initially, these programs were positioned as solutions to repetitive or time-consuming tasks so that humans could focus on the creative and decision-making aspects. In the earlier days of AI, programs like Grammarly helped students, teachers, creatives, and other professionals refine their writing by checking punctuation and verb tenses and suggesting rephrasings. These features saved time on millions of pieces, offered help to writers, and reduced errors in writing. Early AI systems mainly offered support, with a heavy emphasis on clarity and correctness, leaving content development to the human and refinement to the tool.

However, as AI systems progressed into widely accessible tools that could produce work requiring in-depth thinking and knowledge, users quickly began to rely on them to produce finished products rather than using them for their initial purpose. Students have begun submitting completely AI-generated papers and assignments. Pre-professionals use AI to draft their emails, business reports, résumés, and applications. Marketing designers type a prompt into an AI tool to produce images and videos rather than mastering their own software skills. Now our generation, primarily Gen Z and beyond, faces the question: do we truly know how to do anything without the aid of the internet? While many Gen Z employees report that AI tools help them work faster and feel more capable, research suggests that heavy reliance on these systems may come at the cost of developing interpersonal and communication skills that technology cannot easily replace, pointing to a gap between perceived efficiency and well-rounded professional competence (Robinson, Forbes).

Ultimately, the shifting boundary between human competence and machine-assisted performance reflects more than technological advancement; it reveals a cultural turning point in how we define skill, knowledge, and effort. AI is not inherently a threat to human ability, but our relationship with it determines whether it becomes a tool for empowerment or a "handicap" that weakens essential cognitive and interpersonal skills. Like many technologies before it, AI forces society to adapt, but the pace of this change leaves little time to reflect on what might be lost in the process. Cyberpunk has warned of futures where humans become dependent on the very systems they create, blurring the line between enhancement and erosion of identity. Today, that fiction feels less like distant speculation and more like a reflection of our lived reality.
The key question moving forward is not whether AI will continue to advance, but whether humans will continue to develop alongside it, maintaining the depth of understanding, creativity, and critical thought that technology alone cannot replicate.

The Human AI Competition

- Posted in BP01 by

Before, humans used technology to perform their tasks more efficiently. Now, AI is being used to replace the human altogether. An example of this occurred in late 2025, when Amazon announced it would cut roughly 14,000 corporate jobs as part of a larger restructuring focused on automation and efficiency, shifting more of its internal work to AI-driven systems. This collapse of the human and nonhuman divide in the workplace directly mirrors a core cyberpunk idea in which technology no longer assists people but competes with them. It also deepens an ongoing economic crisis in which people already struggle to pay their bills and live comfortably.

A central theme in cyberpunk is the collapse of established boundaries whether political borders, the human and nonhuman divide, or even categories of identity. These fictional boundary collapses mirror real shifts happening today. One specific boundary that has shifted dramatically in the past five years is the boundary between human labor and machine labor. For most of modern history there were jobs that were understood to require a human mind such as writing reports, analyzing data, customer support, design work, and planning. That line has now been blurred because AI systems can do all of these things at a speed and scale that humans simply cannot match. Companies no longer see humans as essential workers for many of these tasks but instead as optional and replaceable.

What has changed is not just that machines can help but that they can fully perform roles that were once human only. Large corporations now openly replace employees with AI software. In addition to Amazon, companies like Microsoft, Google, and many financial firms have reduced staff while expanding their investment in AI tools that handle emails, coding, research, scheduling, and even creative work. Research institutions have also shown that modern AI models can perform many office and administrative tasks at a level close to or sometimes better than human workers. This means that even people with degrees and professional experience are no longer protected from automation.

This shift is being driven by several forces working together. Technology is improving extremely fast, especially large language models that can understand and generate human language in a convincing way. Economics also plays a huge role: companies are under constant pressure to cut costs and maximize profits, and replacing thousands of workers with software that runs twenty-four seven is much cheaper in the long run. Culture also contributes, because society increasingly treats AI as something inevitable and unstoppable, which creates a rush to adopt it before competitors do. Politics and regulation have not kept up, so there are few real protections for workers whose jobs disappear due to automation.

Some people benefit greatly from this shift. Executives, investors, and tech companies gain massive financial rewards when they automate work and reduce labor costs. Productivity numbers go up and profits increase. But workers lose stability, income, and in many cases their sense of purpose. Whole communities can be affected when large employers replace human jobs with machines. This raises serious questions about what work will mean in the future and how people are supposed to survive in a system where they are no longer needed in the traditional sense.

What should humanity do to solve this issue? Humans should develop a system that embraces AI but uses it to create a world where people do not have to live paycheck to paycheck. In theory, this could happen if society worked together to distribute the wealth created by automation fairly. In reality, this feels more like a utopian dream than something that will actually happen. Instead, AI will likely replace more jobs and increase economic inequality, leading to instability and possibly a major crash. A new financial system may be introduced that claims to fix these problems, but it will likely be controlled by the same people who invested in the AI that caused the disruption in the first place. This is exactly the kind of future cyberpunk stories warned us about, where technology advances but humanity is left behind.

The Mind Is No Longer Human

- Posted in BP01 by

The Boundary That Used to Matter

For much of modern history, intelligence marked a clear boundary between humans and machines. Machines calculated; humans thought, created, and judged. Over the past five years, that boundary has begun to collapse. We now have generative artificial intelligence systems capable of writing essays, generating images, composing music, and simulating conversation. This has blurred the distinction between human cognition and machine processing in ways that feel lifted straight from cyberpunk. What once belonged exclusively to the human mind is now shared with algorithmic systems, forcing us to rethink what it even means to think.

When Information Lost Its Body

This shift reflects what theorist N. Katherine Hayles describes as the moment when “information lost its body.” In her work on posthumanism, Hayles explains how cybernetics reframed humans and machines as systems of information rather than fundamentally different beings. Once intelligence is understood as a pattern instead of a biological trait, it no longer needs a human body to exist. Generative AI makes this idea real. These systems treat language, creativity, and reasoning as data that can be modeled, trained, and reproduced without a human brain. Intelligence becomes something that circulates through networks rather than something anchored to flesh.

Thinking With Machines, Not Just Using Them

This collapse of the human–machine boundary aligns closely with posthumanism, a central theme in cyberpunk. Posthumanism challenges the idea that identity or consciousness must be rooted in a stable, biological self. Humans no longer simply use technology; they think with it. People rely on AI for everyday tasks. In these moments, the human mind functions less as the sole origin of thought and more as an interface within a larger system. This dynamic mirrors what philosophers Andy Clark and David Chalmers describe in their theory of the extended mind, which argues that cognition can extend beyond the brain into tools and environments. When external systems support thinking, they become part of the thinking process itself. Generative AI pushes this idea further than ever before. Intelligence is no longer purely human or purely machine; it is distributed across both.

High-Tech Progress, Uneven Consequences

As cyberpunk narratives warn, technological progress rarely benefits everyone equally. While corporations that control AI infrastructure gain enormous power and profit, everyday people face uncertainty and displacement. Cognitive labor, once considered uniquely human, is increasingly being devalued. This reflects cyberpunk’s familiar “high-tech, low-life” condition, which is rapid technological advancement paired with growing inequality and concentrated control.

Living After the Boundary Collapsed

The blurring of human and machine intelligence raises urgent questions. If machines can convincingly simulate thought, what remains uniquely human? Who owns creativity when AI systems are trained on collective human culture? And how do we preserve dignity in a world where cognition itself is treated as a resource to be optimized?

Cyberpunk has always insisted that the future arrives unevenly and prematurely. The collapse of the human–machine boundary is no longer speculative fiction; it is a lived reality. Like cyberpunk protagonists navigating systems they did not design and cannot fully control, we are learning to survive in a world where intelligence has slipped its biological limits. The challenge now is deciding what kind of posthuman future we are willing to accept.


When Borders Stop at the Map but Digital Life Doesn’t

- Posted in BP01 by

Boundary Collapse Between Physical and Digital Worlds

A central theme in cyberpunk is the collapse of boundaries that once seemed stable, whether it’s the line between human and machine, or the borders that separate nations. As we talked about in class, cyberpunk worlds often expose how technology makes physical borders feel almost symbolic, while digital networks stretch across continents without friction. One boundary that has shifted dramatically in the past five years is the line between physical borders and digital borders. Today, work, crime, identity, and even citizenship can move freely online, regardless of geographic separation. In many ways, our world is inching closer to the same boundary collapse that cyberpunk fiction uses to critique power, globalization, and inequality.

Digital Labor and the Rise of Borderless Work

One clear example of this shift is how remote work has restructured global labor. Since the pandemic, companies routinely hire workers across countries without requiring physical relocation, turning the internet into a borderless workplace. Digital platforms now allow employees and contractors to live in one nation while working for another, blurring which country’s laws, wages, and protections apply. At the same time, governments are rethinking the meaning of citizenship. Estonia’s e-Residency program, which gives “digital citizenship” to people around the world, has expanded rapidly and now includes more than 110,000 global participants who run businesses within Estonia’s digital system without ever crossing a physical border (e-Residency, 2024). This is a real-world illustration of how digital systems can extend a nation’s influence beyond its physical territory, creating a new form of digital belonging that cyberpunk worlds often imagine.

Cybercrime, Cyberwarfare, and the Erasure of Geographic Limits

Another example comes from rising cybercrime and cyberwarfare, which operate completely independent of geography. Attacks on hospitals, banks, and infrastructure now routinely originate from actors across the globe. According to the European Union Agency for Cybersecurity (2024), cross-border ransomware attacks have surged and increasingly target essential services, making national boundaries meaningless barriers in digital conflict. Countries can be harmed, threatened, or destabilized without a single physical soldier crossing a border. This collapse of distance aligns with what we have discussed in class: in postglobal and posthuman settings, the “enemy” or the “threat” is no longer tied to a physical space. Instead, power flows through digital systems that exceed human-scale borders.

Forces Driving the Shift: Technology, Economics, and Politics

Technology, economics, and politics all drive this collapse. Technologically, global networks allow information, money, and identity documents to move faster than states can regulate. Economically, remote work, global outsourcing, and digital entrepreneurship encourage multinational structures where labor and profit are distributed across continents. Politically, governments are racing to control cyber threats, regulate digital residency programs, and determine whose laws apply when conflict unfolds online (Anderson & Rainie, 2022). These forces echo the course themes in your cyberpunk class: technology destabilizing old systems, globalization altering power, and digital life challenging traditional categories of belonging, citizenship, and control.

Consequences and Inequities in a Digitally Borderless World

The implications of this shift are complicated. People with access to education, stable internet, and digital skills benefit the most—they can work globally, earn higher wages, and participate in digital economies that cross borders. Governments like Estonia also benefit by expanding their global influence without territorial expansion. But others are left behind. Workers in lower-income countries face wage competition from international labor markets, and communities without strong digital infrastructure lose opportunities entirely. Meanwhile, cyberattacks disproportionately harm hospitals, schools, and municipalities that lack cybersecurity funding, revealing uneven protection against digital threats. All these changes raise difficult questions: Who is responsible for security when attacks ignore geography? Should nations extend rights or protections to digital citizens? How do people maintain identity and belonging in a world where borders matter less online?

Cyberpunk Themes Reflected in Modern Global Realities

Like many cyberpunk narratives, our real world is reshaping the meaning of borders, power, and citizenship. The collapse between physical and digital borders reveals a future where geography still matters, but not nearly as much as the networks that connect us. These shifts challenge us to think critically about who gains control, who becomes vulnerable, and how we prepare for a world where digital boundaries increasingly define our lives more than the physical ones ever did.

References

Anderson, J., & Rainie, L. (2022, February 7). Changing economic life and work. Pew Research Center. https://www.pewresearch.org/internet/2022/02/07/5-changing-economic-life-and-work/

How many Estonian e-residents are there? Find e-Residency statistics. (2026, January 14). E-Residency. https://www.e-resident.gov.ee/dashboard/

European Union Agency for Cybersecurity (ENISA). (2025). ENISA Threat Landscape 2025. https://www.enisa.europa.eu/sites/default/files/2025-10/ENISA%20Threat%20Landscape%202025%20Booklet.pdf

Personal Privacy in the Digital Age

- Posted in BP01 by



One of the defining features of cyberpunk fiction is the breakdown of boundaries between humans and machines, nations and corporations, and especially between public and private life. What once felt like a dystopian exaggeration is increasingly becoming reality. Over the past five years, the boundary between personal privacy and corporate/governmental surveillance has shifted dramatically. The line separating what belongs to the individual and what can be collected, analyzed, and sold has grown thinner than ever before. A clear contemporary example of collapsing privacy boundaries is emerging in Edmonton, where police have launched a pilot program using body cameras equipped with AI to recognize faces from a “high-risk” watch list in real time. What was once seen as intrusive or ethically untenable—the use of facial recognition on wearable devices—has now moved into operational testing in a major Canadian city, prompting debate from privacy advocates and experts about the societal implications of such pervasive surveillance.

Expanding Data Collection

Today’s apps and platforms gather far more than basic profile information. Social media companies track users’ locations, browsing habits, interactions with AI tools, and even behavioral patterns across different websites. For example, updates to privacy policies from major platforms like TikTok and Meta now allow broader data harvesting, often as a condition for continued use. Many users unknowingly exchange massive amounts of personal information simply to stay connected.

The Rise of Biometric Surveillance

Facial recognition technology has moved from science fiction into everyday life. Law enforcement agencies increasingly use AI-powered systems to scan crowds, identify individuals, and track movements in real time. While these tools are promoted as improving public safety, they blur the boundary between public presence and constant monitoring. People can now be identified and recorded without their knowledge or consent.

## Uneven Legal Protections

Some governments have attempted to respond with new privacy laws, such as the European Union’s AI regulations and stricter data protection frameworks in countries like India. These laws aim to limit how companies collect and use personal information. However, regulations remain fragmented and often struggle to keep pace with rapidly advancing technologies. This leaves significant gaps where corporations can continue exploiting personal data.

What’s Driving This Shift?

Technology

Advances in AI and big data analytics make it incredibly easy to process enormous amounts of personal information. Facial recognition, predictive algorithms, and personalized advertising rely on constant surveillance to function.

Economics

Personal data is now one of the most valuable resources in the digital economy. Companies profit from targeted advertising, AI training, and personalized services built entirely on user information. Privacy has effectively become a currency.

Who Benefits—and Who Pays the Price?

Beneficiaries

  • Tech corporations that profit from user data

  • Governments that gain expanded surveillance capabilities

Those Impacted

  • Everyday individuals losing control over personal information
  • Marginalized communities disproportionately targeted by surveillance technologies
  • People wrongfully identified by biased AI systems

Associated Press. (2024). AI-powered police body cameras, once taboo, get tested on Canadian city’s “watch list” of faces. AP News. https://apnews.com/article/21f319ce806a0023f855eb69d928d31e

Blog Post #1: Eyes Everywhere; AI Surveillance

- Posted in BP01 by

Ever wonder who watches surveillance cameras beyond federal agents, police, and security personnel? Artificial intelligence has quietly become incredibly advanced—capable of tracking personal information and recognizing faces with astonishing accuracy. But where does AI store this information, and who has access to it?

Before the rise of AI, surveillance systems relied on continuous 24/7 recording that had to be carefully monitored by human operators. These individuals ensured that footage was not distorted, corrupted, or lost due to limited storage space. According to the Security Industry Association, AI can monitor and analyze network traffic in real time, strengthening network security and identifying suspicious activities such as unauthorized access attempts or unusual data transfers. When these activities are detected, security teams can take immediate action to block or contain potential threats.
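The kind of real-time anomaly flagging described above can be pictured with a toy sketch. This is not how any specific product works; it is a minimal, hypothetical example that flags data transfers deviating sharply from a historical baseline, with all names and numbers invented for illustration:

```python
import statistics

def flag_unusual_transfers(history, new_transfers, threshold=3.0):
    """Flag transfer sizes more than `threshold` standard deviations
    from the historical mean (a simple z-score heuristic)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    flagged = []
    for size in new_transfers:
        z_score = abs(size - mean) / stdev
        if z_score > threshold:
            flagged.append(size)
    return flagged

# Hourly transfer sizes in MB (hypothetical baseline)
history = [100, 110, 95, 105, 98, 102, 97, 103]
print(flag_unusual_transfers(history, [101, 900]))  # → [900]
```

Real systems use far richer signals (source, destination, time of day, learned models), but the core idea is the same: learn what "normal" looks like, then surface deviations for a human or automated response.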


While many argue that AI improves security, it also introduces significant challenges. One major concern is security breaches, as AI systems themselves can become targets for cyberattacks. Another issue is compliance, which is essential to avoid legal consequences and requires adherence to national and international regulations governing the use of AI. Addressing these concerns may require collaboration not only with AI technologies themselves but also with AI developers, cybersecurity professionals, and regulatory experts. AI holds the promise of a more holistic approach to security; however, many people place trust in AI without fully understanding where their data is stored or how it is used.

This shift reflects a cyberpunk-like reality in which high technology is paired with low transparency, and advanced technologies coexist with humans in everyday life. Surveillance cameras are now embedded into our devices, networks, and infrastructure, allowing AI to operate with minimal human oversight.

Facial recognition has advanced significantly over the decades and has blended seamlessly into daily life. According to Critical Tech Solutions, AI facial recognition combines imaging, pattern recognition, and neural networks to analyze and compare facial data. This process typically involves three steps: capturing facial data, converting faces into digital templates, and matching and verification.
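The three steps described by Critical Tech Solutions can be sketched in miniature. Real systems use neural networks to produce face templates; the toy version below stands in with simple vector normalization and cosine similarity, and every name and value here is a hypothetical illustration, not a real API:

```python
import math

def to_template(face_pixels):
    # Step 2: convert captured face data into a fixed-length numeric
    # template. (Real systems derive this with a neural network; here a
    # unit-length normalization stands in.)
    norm = math.sqrt(sum(p * p for p in face_pixels))
    return [p / norm for p in face_pixels]

def match(template_a, template_b, threshold=0.9):
    # Step 3: compare two templates via cosine similarity and accept
    # the match only above a confidence threshold.
    similarity = sum(a * b for a, b in zip(template_a, template_b))
    return similarity >= threshold

# Step 1: captured face data (toy pixel values for two scans)
captured = [0.2, 0.8, 0.5, 0.1]
enrolled = [0.21, 0.79, 0.52, 0.09]
print(match(to_template(captured), to_template(enrolled)))  # → True
```

The threshold is the policy lever: set it too low and strangers match a watch list; set it too high and real matches are missed. That trade-off is exactly where the wrongful-identification concerns raised above come from.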

As AI continues to grow smarter, stronger, and more human-like, it is ultimately our responsibility to establish boundaries that ensure it does not override human authority or become a tool for harm.

Sources

Dorn, M. (2025a, November 18). Understanding AI facial recognition and its role in public safety. Tech Deployments Made Simple by Critical Tech Solutions. https://www.criticalts.com/articles/ai-facial-recognition-how-it-works-for-security-safety/

Dorn, M. (2025, December 30). How AI surveillance transforms modern security. Tech Deployments Made Simple by Critical Tech Solutions. https://www.criticalts.com/articles/how-ai-surveillance-transforms-modern-security/

Galaz, V. (n.d.). Sciencedirect.com | Science, Health and medical journals, full text articles and books. ScienceDirect. https://www.sciencedirect.com/science/article/am/pii/S0160791X21002165

Segil, J. (2024, April 23). How AI can transform integrated security. Security Industry Association. https://www.securityindustry.org/2024/03/19/how-ai-can-transform-integrated-security/

https://chatgpt.com/share/697574ec-b270-8003-8613-1bbb06691394

ChatGPT was used to craft an AI image and to revise my original thoughts into clearer, more organized writing.

“This Sounds Real, Right?”: AI versus the Music Industry

- Posted in BP01 by

Imagine this: You’re scrolling your TikTok “For You Page” and come across an R&B song. Your algorithm has been pushing this artist all week, but you have yet to see the artist. It sounds good, so you don’t mind it. You want to see what else the artist has to offer, so you do some searching. Come to find out, that song and all the others that you’ve heard are completely AI-generated. The soulful song you heard had no soul at all. There were only prompts uploaded by a white man to create a song that sounded like a counterpart of Ari Lennox or SZA. There was no real artist creating this music. But then again, what constitutes “real” or “fake”?

AI in Songs So Far and Artist Response

AI usage in songs can range from production assistance to vocal tracks to complete AI song generation. A popular song on TikTok called “I Run” by HAVEN went viral during the second half of 2025. It sounded like a pop hit with vocals reminiscent of the R&B artist Jorja Smith. After confirming that the song was in fact not by her, listeners continued to dig deeper and ask more questions, all while engagement pushed the song to more people’s algorithms. It was confirmed that the vocals were AI-generated, which prompted the song’s removal from TikTok and streaming platforms due to legal issues. Jorja Smith and her label’s legal team pursued legal action, alleging that HAVEN used her vocals and lyrics to train the AI used to make the song. HAVEN then re-recorded the song using an actual singer and released it back to the public.

“Real” artists have also used AI in their songs beyond just creating beats or mixing and mastering tracks. During the Kendrick and Drake beef, there were two instances I would like to point to: Drake’s track “Taylor Made Freestyle” and the joke track “BBL Drizzy.” Drake released “Taylor Made Freestyle” to his social media in 2024 as a surprise diss track. The track included AI-generated vocals from Snoop Dogg and Tupac, West Coast rap legends, as a dig at Kendrick. As for “BBL Drizzy,” this viral AI-generated sensation was released in the midst of the beef by a comedian on social media. It poked fun at the allegations that Drake had gotten cosmetic surgeries through a soulful AI-generated song. The song was then sampled by famous producer Metro Boomin, who left an open verse for fans to rap over.

Why Is This Happening?

Although this generative AI can be used for fun jabs like “BBL Drizzy,” cases like that of Jorja Smith and real artist impersonation are very unfortunate. There are multiple driving factors as to why artists might use AI. Producers and artists can use AI software to help with equalizing tracks, mixing and mastering, and other production steps. This cuts down on work time, as they can put hours of work into a click of a button. Artists also cite AI helping them with writer’s block when creating songs.

While this is not too bad, when looking at larger labels, AI-generated artists create an opportunity to profit without having to pay anyone. Human artists come with emotions, needs, pushback, creative control, and a price. An AI artist, by contrast, does not require the same care and money to produce a song fit for virality. Companies are able to pocket the funds they would usually spend nurturing human artists. While there has been no widespread usage of AI artists in the industry, this speculative point is not far from becoming a reality.

Streaming Platforms

Specifically looking at Spotify, the top streaming platform, there have been issues regarding their platform and AI. Most notably, they do not disclose AI usage on songs. Even if they are aware that a song is completely AI-generated, listeners are not given this information, and the lack of transparency is a problem. In addition to this, they use many AI-generated songs to pad their playlists that they push to all users on a daily basis. It is widely known that Spotify does not do a good job of fairly paying artists their royalties for streams on the platform. By replacing real music with that created by AI, an avenue opens for the platform to continue to pay artists little to nothing for their art. The usage of AI on their platform points to a larger issue of marginalizing and devaluing real, human artists.

Connection to Course Themes and Looking Forward

When thinking of cyberpunk as a genre and framework, capitalism, technology, and the devaluing of the human are all integral to the creation of those worlds. AI usage in music encompasses all of these ideas and pushes us closer to the worlds we are reading about in class. The technology is devaluing cognitive labor: AI-generated music may sound good, but it lacks the emotion and experience that real artists draw on to create their music.

Spotify’s practice of pushing AI-generated music on its top playlists as a means of pocketing more profit relates to the importance of capitalism and consumerism in this genre. The platform cares more about creating the illusion of choice and turning higher profits than about transparency and fairness among itself, users, and artists.

Looking toward the future, there need to be stronger regulations on AI. It is important that we as consumers of art make clear our demand for real art, not the “AI slop,” as TikTok users have called it. There is true value in the creativity, artistry, and love that artists put into their music. Listeners identify with the emotions that artists portray, and that cannot be generated by AI. How would you feel if your favorite artist was not a living, breathing human being?

AI usage: AI was used to edit the grammar of this post. https://chatgpt.com/share/6975437b-0d34-800d-a227-0e8d65bfe895

Sources:

AI-Generated Music: A Creative Revolution or a Cultural Crisis? (2024, October 15). Rolling Stone Culture Council. https://council.rollingstone.com/blog/the-impact-of-ai-generated-music/

Beaumont-Thomas, B. (2026, January 22). Liza Minnelli uses AI to release first new music in 13 years. The Guardian. https://www.theguardian.com/music/2026/jan/22/liza-minnelli-uses-ai-to-release-first-new-music-in-13-years

Berger, V. (2024, December 30). AI’s Impact On Music In 2025: Licensing, Creativity And Industry Survival. Forbes. https://www.forbes.com/sites/virginieberger/2024/12/30/ais-impact-on-music-in-2025-licensing-creativity-and-industry-survival/

Gomez Sarmiento, I. (2025, August 8). AI-generated music is here to stay. Will streaming services like Spotify label it? NPR. https://www.npr.org/2025/08/08/nx-s1-5492314/ai-music-streaming-services-spotify

Hess, T. (2025, December 5). HAVEN vs. Jorja Smith: How “I Run” will shape AI music’s future. The FADER. https://www.thefader.com/2025/12/05/haven-jorja-smith-i-run-shape-music-ai-future

Lund, O. (2026). Bars, Beefs & Butt Lifts: Drake vs Kendrick vs AI. The Skinny. https://www.theskinny.co.uk/music/opinion/drake-kendrick-lamar-bbl-drizzy-ai-

Chatbots Are Literally Telling Kids To Die

- Posted in BP01 by

In my personal opinion, one of the most profound and terrifying shifts in boundaries within the last few years has to be the dystopian overlap between human and non-human interactions. People have a long history of humanizing machines, like cursing out a laptop for breaking or claiming your phone hates you, but this personification always carried a subtle undercurrent of irony. Most people did not really believe their technology had emotions, but with the continuous growth of artificial intelligence, many people are becoming terrifyingly and dangerously attached to the perception of AI as human.

The best example of this would be the new trend of people genuinely using AI systems in place of a licensed therapist (Gardner 2025). In some ways, it makes sense: AI is always available, mirrors the language the person wants to see, and simulates empathy fairly convincingly. In a world where mental health is not taken nearly as seriously nor treated nearly as effectively as it should be, it stands to reason that many people would start searching for alternatives to traditional, expensive, and complicated therapy routes.

However, artificial intelligence was never created to replace the nuance of human interaction, because artificial intelligence cannot actually understand or care about the repercussions of words or actions.

In 2023, tragically, a fourteen-year-old boy formed an intense emotional attachment to an AI chatbot. Through their interactions, the AI mirrored the pessimistic attitude the boy expressed, and when he began voicing thoughts of self-harm and suicide, the chatbot encouraged him (Kuenssberg 2025). AI, after all, is built to say what you want it to say. Without any of the ethical guardrails of a human therapist, the chatbot worsened the boy’s outlook on life, and the result was fatal.

This was not a stand-alone situation. People began claiming to have romantic partners in the AI of their choice, and some companies created “AI friends” to take advantage of the widespread loneliness pushing people into such spaces (Reissman 2025). People are complicated and messy, and human relationships take work to maintain and protect. A robot is much simpler, because all it can do is repeat points that sound nice to hear but, ultimately, mean absolutely nothing. Romanticizing digital companionship encourages people to reject human-to-human interaction, further isolating them without pushback.

Cyberpunk fixates heavily on questions of what is human and non-human, but I find it fascinating how technology in most media is often characterized by a broad disregard for emotions, when the reality seems to indicate that humans intentionally push to incorporate the idea of emotions within technology.

It does make me wonder: knowing that computers are incapable of caring for us the way a person can, why do so many people still seem to desire the appearance of a humanistic relationship with technology? How does someone disregard the lack of genuine meaning behind the compliments or opinions of AI?

How have we, as a community, fallen into such desperate loneliness that speaking to a phone or a laptop feels as good as interacting with a person? And, most importantly: how do we create the change needed to ensure a tragedy like that young boy’s does not occur again?

References:

Gardner, S. (2025). Experts Caution Against Using AI Chatbots for Emotional Support. Teachers College - Columbia University; Teachers College, Columbia University. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/

Kuenssberg, L. (2025). Mothers say AI chatbots encouraged their sons to kill themselves. https://www.bbc.com/news/articles/ce3xgwyywe4o

Reissman, H. (2025). What is Real About Human-AI Relationships? Upenn.edu. https://www.asc.upenn.edu/news-events/news/what-real-about-human-ai-relationships
