Identity 2.0: When Your Face Becomes Your Passport, Wallet, and Citizenship

- Posted in Uncategorized by


In a cyberpunk world, identity isn’t just who we are—it’s what corporations and governments can verify, commodify, and control. Today, the boundary between physical identity and digital identity is eroding. What once was a legal document in a wallet is now a constellation of biometric scans, mobile IDs, and digital wallets that follow us everywhere we go. This isn’t tomorrow’s speculation—it’s happening now.

The Boundary That Has Shifted

Historically, identity was rooted in the physical: passports, birth certificates, Social Security cards. In the digital age, identity became credentials we entered online—usernames, passwords, PINs. But in 2025, digital identity systems are increasingly biometric, mobile, and machine-readable, blurring the line between who you are and what a machine recognizes you as.

Governments and corporations are building systems that link your face, fingerprint, voice, or palm directly to essential services like travel, banking, healthcare, and even public benefits. The European Union’s eIDAS 2.0 initiative is creating a digital identity wallet usable across all member states, promising convenience but also redefining what it means to prove who you are in a digital society.

Meanwhile, biometric techniques—once exotic—now fuel everyday authentication. From palm biometrics in stores and hospitals to mobile IDs on a phone, the move toward identity tied to our bodies rather than passwords is accelerating.

What’s Driving the Shift

Technological forces: Biometric systems and mobile identity standards have improved dramatically. Industry reports show passwordless authentication increasingly replacing traditional login methods, with biometrics offering convenience and security advantages—at least superficially.

Economic incentives: Tech companies and governments alike see huge value in digital identity platforms. They reduce fraud, streamline services, and open doors to new monetizable data streams. No database is just for ID anymore—it’s also a goldmine for behavior, spending patterns, and social metrics.

Political and social pressures: The push for digital identity isn’t just consumer convenience. Governments argue it enhances security, prevents fraud, and enables digital citizenship in an era of global mobility. But critics warn that once biometric identity systems become ubiquitous, opting out becomes increasingly difficult.

How This Connects to Cyberpunk

Cyberpunk fiction vividly illustrates worlds where identity is mutable, encoded, and monitored by systems beyond individual control. In Neuromancer or Snow Crash, identity chips, corporate databases, and neural codes make every person traceable and manipulable. Today’s digital identity systems reflect that logic: your face, your palm, your biometric signature becomes a node in a global network, shaped by technical architectures and power structures.

Cyberpunk theory teaches us to see how technologies don’t merely serve users but also reshape social relations. The transition to biometric, mobile IDs recasts identity itself as something processable, shareable, and surveilled—no longer purely personal, but infrastructural.

Who Benefits—and Who’s at Risk?

Potential benefits:

  1. Faster border crossings and secure travel documentation.
  2. Passwordless security that reduces traditional cyber-attacks.
  3. Access to services for people without traditional documentation.

Risks and harms:

  1. Surveillance and privacy erosion: Biometric systems can track movements across spaces, linking online and offline behaviors in ways never before possible.

  2. Exclusion and inequality: Individuals without compatible devices or digital literacy risk being shut out of essential systems.

  3. Permanent identifiers: Unlike passwords, biometric traits cannot be changed. If compromised, your faceprint or fingerprint is compromised for life.

These concerns echo fundamental cyberpunk anxieties about surveillance, agency, and control. When identity becomes a data point indexed and algorithmically processed, the human subject transforms into a profile—a mathematical object to be scored, categorized, and predicted.

Ethical Questions We Must Ask

Consent or coercion? When a digital ID is required for basic services, can consent truly be voluntary?

Who controls your identity? Is it a corporate cloud, a nation-state database, or the individual themselves?

What happens when borders are digital rather than physical? There’s a powerful allure to seamless global identity—but also a danger of borderless surveillance.

Understanding the collapse between physical and digital identity is urgent because it affects every person with a smartphone, a passport, or an online presence. The question isn’t whether identity is changing—but whether we will shape that change or be shaped by it.

APA-Style References

European Commission. (2025). eIDAS 2.0 digital identity wallet framework. TRUSTECH. https://www.trustech-event.com/en/event/news/digital-identity-trends-2025

Akhison, G. (2025). Towards a universal digital identity: A blockchain-based framework for borderless verification. Frontiers in Blockchain. https://www.frontiersin.org/journals/blockchain/articles/10.3389/fbloc.2025.1688287/full

Demystify Biometrics. (2025). Biometrics & digital identity: Top 5 trends. https://www.demystifybiometrics.com/post/march-2025-biometrics-digital-identity-top-5-trends

Digital identity in 2025: biometric wallets and privacy dilemmas. (2025). RTechnology. https://rtechnology.in/articles/1050/digital-identity-in-2025-biometric-wallets-and-privacy-dilemmas

Le Monde. (2025, September 1). The discreet rise of facial recognition around the world. https://www.lemonde.fr/en/pixels/article/2025/09/01/the-discreet-rise-of-facial-recognition-around-the-world_6744911_13.html

Strathmore University CIPIT. (2024). Global biometric and digital identity trend analysis (Global Report). https://cipit.strathmore.edu/wp-content/uploads/2024/05/Global-BDI-Trend-Analysis-Geographical-Assessment-Final-Approval-06.09.2023-compressed.

OpenAI. (2026). Digital identity and biometrics in everyday life [AI-generated image]. https://www.openai.com/dall-e

That Wasn’t Me

- Posted in BP01 by


Intro

With the increase in technological abilities come new evils. Deepfakes are AI-generated images, videos, or audio that make people appear to say or do things that never actually happened. Deepfakes used to produce pornographic content are especially dangerous. These harmful images and audio recordings transcend any single country. The problem is worldwide and increasingly difficult to contain without violating rights or banning the technology completely. Deepfake technology can generate content from a written description, and it can also curate images of a specific person of your choosing performing whatever actions you want. Deepfake technology relies heavily on artificial neural networks, computer systems that recognize patterns in data. These neural networks are fed images and videos and are essentially “trained” to dissect them and replicate those same patterns, as the sketch below illustrates. The possibilities are endless and hard to contain, making the dangers and impact immense.
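To make that "trained to replicate patterns" idea concrete, here is a minimal, purely illustrative sketch in Python using PyTorch. It is not any real deepfake tool and every detail here is a simplifying assumption: just a tiny autoencoder of the kind face-swap pipelines build on, where the network learns to compress face images into a pattern and reconstruct them. Mixing encoders and decoders trained on different people is what lets such tools map one person's features onto another's footage.

```python
# Minimal sketch (hypothetical, illustrative only): a tiny autoencoder of the kind
# deepfake pipelines build on. The network is "trained" to compress face images
# and reconstruct them from the learned pattern.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: learns a compact pattern ("latent code") from a 64x64 RGB face crop
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 256),
            nn.ReLU(),
        )
        # Decoder: learns to rebuild the image from that pattern
        self.decoder = nn.Sequential(
            nn.Linear(256, 3 * 64 * 64),
            nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code).view(-1, 3, 64, 64)

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a real dataset of face crops (here: random tensors)
faces = torch.rand(16, 3, 64, 64)

for epoch in range(5):
    reconstruction = model(faces)
    loss = loss_fn(reconstruction, faces)  # "dissect and replicate the same patterns"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Real systems are far larger and are trained on thousands of images of a target person, but the principle is the same: the model learns the statistical patterns of a face well enough to reproduce them on demand.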

Breakdown

When we take a step back and examine deepfakes, we have to consider who these harmful videos benefit. For starters, the tech companies that make it possible for deepfakes to be generated benefit indirectly. An increase in deepfakes leads to an increase in demand for AI tools, drives more platform engagement, and ultimately produces a substantial economic benefit by making those companies more money. Aside from the tech companies, the users benefit. Users get to see content featuring the person or people of their choice without having to work out the logistics of making their dreams a reality. They can see their favorite celebrities, friends, neighbors, or even coworkers in 18+ material in an instant. Additionally, we can peel back another layer: the people creating this content can potentially blackmail and extort their victims by threatening to release it. Not only do the victims of this content suffer, but the increase in misinformation also erodes society's ability to trust digital images.

Questions

As deepfake technology continues to advance, it poses a serious threat and forces us to think about current and future repercussions. For instance, how can we as humans accurately distinguish AI-generated content from real content? If 18+ material can be made so easily, what's to stop content creators from targeting children, and what does that mean for future rates of sexual crimes committed against children? What's to stop people from claiming that real content is AI generated? And as we see the damage this technology is capable of dealing, how do we begin to regulate the harm without having to ban the technology as a whole?

Statistics

In the article "Social, legal, and ethical implications of AI-generated deepfake pornography on digital platforms: A systematic literature review," researchers compiled statistical findings on how large an impact deepfake technology has on our society. The research showed that from 2019 to 2023 there was a 550% increase in deepfake videos. Of those, 99% were pornographic in nature, and within that 99%, 98% depicted women and young girls. These findings indicate a clear pattern of gender-based targeting. The creation of 18+ material using AI has a heavy impact on its victims. Many women within this study were found to have suffered deep psychological trauma, leaving lasting anxiety and emotional distress, which is exacerbated as the content spreads onto platforms that are difficult to regulate and control. No matter the social status of the victim, deepfakes have the potential to harm not only the person's public image but also their career.

Counteract

As difficult a problem as deepfakes are to tackle, there have been attempts to contain and reduce these cyber crimes. In May of 2025, President Donald Trump signed the Take It Down Act. This law enacts stricter penalties for the distribution of deepfakes, as well as revenge porn and other non-consensual 18+ content. The fundamental mechanism behind the act is that if a victim contacts a platform on which their deepfake content has been posted, the platform has 48 hours to take it down and must take steps to erase all duplicates as well. The penalty for failing to remove the material is mandatory restitution and criminal penalties, including prison, a fine, or both.

Connection

Deepfakes can be linked to cyberpunk through the technology dynamics we have described within our society: corporations overriding ethics and technology exploiting bodies through "high tech, low life" principles, as well as identity becoming fragmented and commodified. More specifically, deepfakes can be connected to the Second Industrial Revolution. Just as the Second Industrial Revolution produced automation and new technologies that fundamentally changed how images were produced and distributed, deepfakes represent a modern version of those same principles. Just as machines in the Second Industrial Revolution relied on human labor, deepfake technologies still rely on a human creator to prompt them. Both the Second Industrial Revolution and deepfake technology demonstrate a technological shift that raised questions about authenticity and control over identity.

Sources

Furizal, F., Ma’arif, A., Maghfiroh, H., Suwarno, I., Prayogi, D., Kariyamin, K., Lonang, S., & Sharkawy, A.-N. (2025). Social, legal, and ethical implications of AI-Generated deepfake pornography on digital platforms: A systematic literature review. Social Sciences & Humanities Open, 12, 101882. https://doi.org/10.1016/j.ssaho.2025.101882
AP News. (2025, April 29). President Trump signs Take It Down Act, addressing nonconsensual deepfakes. What is it? AP News. https://apnews.com/article/take-it-down-deepfake-trump-melania-first-amendment-741a6e525e81e5e3d8843aac20de8615
U.S. Government Accountability Office. (2020, October 20). Deconstructing deepfakes—How do they work and what are the risks? U.S. GAO WatchBlog. https://www.gao.gov/blog/deconstructing-deepfakes-how-do-they-work-and-what-are-risks
TAKE IT DOWN Act, S. 146, 119th Cong. (2025). Congress.gov. https://www.congress.gov/bill/119th-congress/senate-bill/146

Is It Reality Or Parasocial Perception

- Posted in BP01 by

As the world progresses and continues to incorporate technology into the everyday weaving of life, it's natural to question the consequences of such advancement and how it shapes our moral, social, and political parameters. One particular facet of technology that I feel has done irreparable damage is social media. It truly is a double-edged sword; on one hand it gives everyone the chance to voice their opinions and be heard, a necessity for the furtherance of rights and safety for marginalized groups. On the other hand, social media gives everyone the ability to voice their opinions and be heard, which obviously includes those who oppose civil justice being delivered to people who are different from them. Social media has expanded the way that we interact with one another, allowing us access to people we would otherwise never see or interact with. Furthermore, social media has given many a false sense of confidence and a confusion in which they believe that just because they can do something, they should and are entitled to do so. Thus we have an expedited emergence of parasocial relationships that tend to push stereotypical agendas in regard to race, gender, sexuality, and more.

A parasocial relationship refers to a one-sided tether that one creates, in which they devote their energy and emotions to someone who doesn't even know they exist, to put it simply (Hoffner, 2022). A lot of times we see parasocial relationships form between celebrities and their fans, with stan culture being a prime example. I want to preface this by saying there is nothing wrong with admiring an artist or their work. However, it has gotten to a point at which people genuinely believe they know the ins and outs of a person they have never met. Oftentimes, this is because the person is constantly being fed posts and other information about whatever interests them by the algorithm. So even though they've most likely never met the person they admire, they are in constant contact with them.

With that said, it is important to understand that parasocial relationships transcend the dynamic between mega fan and idol; they also apply to those who consume 30-second media and then apply what they see to entire marginalized groups. In our current political climate, paired with social media usage, it does not take much to inspire discrimination toward certain groups of people. All it takes is for a certain demographic to see a 30-second clip of young Black students dancing and celebrating at their graduation to stimulate the misuse of the term "Black fatigue." A term that was initially created to encompass the vast stress and exhaustion that comes with living in a country that champions systemic racism has become a discriminatory term against the very community that created it. All it takes is one conversation discussing the liberties that immigrants deserve and the injustices they face to breed negativity from those who are ignorant and hateful. These interactions then begin to seep into the physical side of everyday life. It's not just comments under a TikTok post, but micro- and macroaggressions in public, police brutality, ICE raids. Social media allows for the circulation of stigma that encourages people to stay true to their biases and never confront their prejudice. Thus the cycle continues. And we still use social media. In a way it reminds me of "bad faith" (Jean-Paul Sartre): the idea of critiquing social media but still somehow glamorizing it, just as cyberpunk critiques aspects of society such as corporate capitalism while still being enamored with the aesthetic of it all.

Assisted Intelligence: Are We Losing Skills in the Age of AI?

- Posted in BP01 by

In the past five years, the boundary between human competence and machine-assisted performance has shifted. As a society, we are moving toward a world where people can assume a coat of knowledge simply by inputting a prompt into a program rather than developing their own skills. This raises a pressing question: is society's competence declining as the influence of technology increases, or is technology reshaping what is required of humans to be successful?

Examples of this shift are evident in our scholarly institutions, professional work, and creative fields. We have AI tools like ChatGPT that can produce an answer to almost any prompt submitted through its website, even moving into partnerships with Meta, Google, and other tech giants of the world. Initially, these programs were used as solutions to repetitive or time-consuming tasks so that humans could focus on the creative and decision-making aspects. In the earlier days of AI, we had programs like Grammarly that helped students, teachers, creatives, and other professionals shape their writing by checking punctuation and verb tenses and suggesting sentence re-phrasing. These features saved time on millions of pieces, offered help to writers, and reduced errors in writing. Early AI systems mainly offered support, with a large emphasis on clarity and correctness, leaving content development to the human and refinement to the AI tool.

However, as AI systems progressed and emerged as widely accessible tools that could not only assist but also produce work requiring in-depth thinking and knowledge, users quickly began to rely on the applications to produce these products rather than using them for their original purpose. Students have begun submitting completely AI-generated papers and assignments. Pre-professionals use AI to draft their emails, business reports, resumés, and applications. Marketing designers type a prompt into an AI tool to produce images and videos rather than mastering their own software skills. Now we are being questioned as a generation, primarily Gen Z and beyond: do we truly know how to do anything without the aid of the internet? While many Gen Z employees report that AI tools help them work faster and feel more capable, research suggests that heavy reliance on these systems may come at the cost of developing interpersonal and communication skills that technology cannot easily replace, pointing to a gap between perceived efficiency and well-rounded professional competence (Robinson, Forbes).

Ultimately, the shifting boundary between human competence and machine-assisted performance reflects more than just technological advancement; it reveals a cultural turning point in how we define skill, knowledge, and effort. AI is not inherently a threat to human ability, but our relationship with it determines whether it becomes a tool for empowerment or a "handicap" that weakens essential cognitive and interpersonal skills. Like many technologies before it, AI forces society to adapt, but the pace of this change leaves little time to reflect on what might be lost in the process. Cyberpunk ideas have warned of futures where humans become dependent on the very systems they create, blurring the line between enhancement and erosion of identity. Today, that fiction feels less like distant speculation and more like a reflection of our lived reality. The key question moving forward is not whether AI will continue to advance, but whether humans will continue to develop alongside it, maintaining the depth of understanding, creativity, and critical thought that technology alone cannot replicate.

Blog Post 1: AI and the Quiet Erosion of Human Cognition

- Posted in BP01 by

I was watching a TikTok video that talked about how young people are using AI for simple tasks. One lady uses AI to generate a grocery list because she "gets confused on what to buy, because there are so many options." This might seem small and harmless, but it reflects a larger shift happening in our everyday lives. Artificial intelligence is no longer just helping us with complex issues; it is increasingly being used for brainstorming, organization, therapy, and even decision-making. This raises an important question: what happens to human thinking when machines think for us? The brain works on a "use it or lose it" principle. The less we use our brains to brainstorm and think critically, the more we lose our ability to generate new ideas and to learn and grow as a society. People are surrounded by technology that makes life easier, but also more controlled and less engaging. Today AI does not dominate, but it creeps in so gradually that we do not realize we are diminishing our intelligence to avoid a slight inconvenience rather than figuring things out for ourselves. Cognition happens through the human brain; it is how we make our memories, create experiences, and solve real-world problems. When we leave all the planning and decision-making to AI, the problem is not that AI will become smarter than humans, but that humans will cease to function on a cognitive level and stop trusting their ability to operate without technology.

https://httpsmediumcom-at-markaherschbergis-ai-just-a-tool-for-lazy-people-542c29a08020

America’s Test Dummies

- Posted in BP01 by

Introduction

In the past five years we've seen many, and I mean many, unprecedented events and choices being made, especially when it comes to those related to our bodies. It seems as if, since Trump's inaugural tenure in office (2020, a little over five years ago), the care for quality of life has significantly decreased. To me, this feels like an eventual progression toward the development of genetic and bodily augments to improve one's health.

My Body, Your Rights?

This trend started with the overturning of Roe v. Wade, where we saw the erasure of protection for abortion rights, spurred by a Republican regime that often cited religion and cruelty as justification for the decision, taking away one's autonomy over their own body even when it risks their life. Furthermore, many call out the hypocrisy of that decision, as it was followed by a shocking lack of care for parents and children in the policies they push, whether it be defunding the Department of Education or not working to reform the foster care system, two sectors that could become far more severe issues for our country in the future.

Pay or Die

Stricter eligibility rules for healthcare also showcase this, as affordable treatment becomes a mere myth. Negative sentiment about this development can be seen in the Luigi Mangione case, where a man killed a healthcare CEO in broad daylight for similar reasons. It seems as if the country is going backwards when it comes to health and pharmacy, almost as if it is trying to kill off the less fortunate.

The Downwards Pyramid

My final and most recent example is the appointment of RFK Jr. as Secretary of Health and Human Services, who has repeatedly touted seemingly nonsensical claims about the health of Americans and what we need to do to fix those issues, such as his reformed food pyramid that emphasizes meats and dairy, which were previously limited due to how much fat they contain.

Conclusion

Clearly all of these examples are political, as their causes lie with the upper echelon and the politicians we have in office, with their chaotic policies and use of power to influence how we can, and in theory "should," use our bodies. Furthermore, this ties back to Elon Musk and his venture into the world of brain chips. If left unchecked, these cyberpunk life forms could become very much real.

This progression of events, to me, will become a large reason for our species' integration with technology. Eventually people will become so unhealthy that augmentation will be necessary for survival and success in everyday life, a dependency that will control our entire lives. They're looking to use us citizens as lab rats for early body augmentation, and this is the first phase: breaking us down so that it becomes needed.

https://www.npr.org/2026/01/07/nx-s1-5667021/dietary-guidelines-rfk-jr-nutrition

https://pmc.ncbi.nlm.nih.gov/articles/PMC9824972/

https://www.nbcnews.com/news/us-news/handgun-silencer-manifesto-items-luigi-mangiones-backpack-arrest-polic-rcna248025

When the Human Body Became a Dataset

- Posted in BP01 by

In cyberpunk stories, explosions and futuristic weapons aren't usually the scariest things. Instead, the fear shows up when well-known lines start to blur. One of the largest boundary changes of the last five years is the blurring of the line between digital data and the human body. What used to be private, personal, and internal is now always being watched, recorded, and analyzed. In today's society, the body produces information instead of just being there.

Wearable technologies and apps that help you keep track of your health are now a normal part of everyday life. Devices like Apple Watches, Fitbits, and smartphone health applications keep track of things like your heart rate, sleep patterns, physical activity, oxygen levels, and even your menstrual cycle. These tools claim to give people knowledge and control over their health, and they are often marketed as empowering. This shift, though, means more than mere convenience. It marks a change in how people see the body, from a lived experience to a series of measurements.

This boundary change accelerated after the COVID-19 pandemic, when digital health monitoring developed quickly. Health data became crucial for making decisions about safety, risk, and productivity. At the same time, commercial companies gained access to incredibly personal biological data. Reporting by NPR has highlighted growing concerns about how period-tracking and health apps may collect, store, and potentially expose sensitive reproductive data, especially in the wake of shifting abortion laws (NPR, 2022). Taking care of yourself could turn into being spied on.

Cyberpunk theory can help us figure out why this transition feels unsettling. Posthumanism regards the human body as a hybrid system interconnected with technology, rather than a static biological boundary. Wearable technologies extend perception by turning physiological processes into real-time feedback. Technology can read the body before we do, using vibrations to signal a quicker heart rate or an abnormal rhythm. Human experience becomes legible as data. But cyberpunk often stresses that technology operates within systems of power. Data is not impartial. Institutions that often care more about making money than helping people collect, analyze, and control it. Your health data can affect your ability to receive insurance, get a job, keep your reproductive health private, and access resources. In this view, the body is valuable not because of what it has lived, but because of what about it can be quantified.

Globalization makes this boundary disintegration much worse. Health apps work across borders and save data on international servers governed by different privacy rules. A person may produce bodily data in one nation while companies scrutinize and capitalize on it in another. Cyberpunk often shows this: systems that go beyond national borders yet still affect the people within them.

This change really does have benefits. Early medical detection, monitoring of chronic illnesses, and easier access to healthcare have all saved lives. Health technology gives many users peace of mind and control. But the hazards are not spread out evenly. Communities that are already under more surveillance are generally more at risk when physiological data becomes institutional knowledge. In a society based on data, who really owns the body? This is a key question in cyberpunk. When biological data is turned into a product, personal freedom becomes fragile.
It gets harder and harder to tell the difference between care and control. In Blade Runner, artificial beings try hard to be seen as more than just things that were made. Today, people face a quieter version of the same problem. As our bodies become dashboards, algorithms, and projections, we may be perceived more as data sets than as persons. The breakdown of the line between body and data makes us question whether technological advancement can go hand in hand with respect, or whether making ourselves measurable means we are more likely to be controlled.

The Human AI Competition

- Posted in BP01 by

Before, humans used technology to perform their tasks more efficiently. Now, AI is being used to replace the human altogether. An example of this occurred in late 2025, when Amazon announced it would cut roughly 14,000 corporate jobs as part of a larger restructuring focused on automation and efficiency while shifting more of its internal work to AI-driven systems. This collapse of the human and nonhuman divide in the workplace directly mirrors a core cyberpunk idea where technology no longer assists people but competes with them. It also adds to the ongoing economic crisis in which people already struggle to pay their bills and live comfortably.

A central theme in cyberpunk is the collapse of established boundaries whether political borders, the human and nonhuman divide, or even categories of identity. These fictional boundary collapses mirror real shifts happening today. One specific boundary that has shifted dramatically in the past five years is the boundary between human labor and machine labor. For most of modern history there were jobs that were understood to require a human mind such as writing reports, analyzing data, customer support, design work, and planning. That line has now been blurred because AI systems can do all of these things at a speed and scale that humans simply cannot match. Companies no longer see humans as essential workers for many of these tasks but instead as optional and replaceable.

What has changed is not just that machines can help but that they can fully perform roles that were once human only. Large corporations now openly replace employees with AI software. In addition to Amazon, companies like Microsoft, Google, and many financial firms have reduced staff while expanding their investment in AI tools that handle emails, coding, research, scheduling, and even creative work. Research institutions have also shown that modern AI models can perform many office and administrative tasks at a level close to or sometimes better than human workers. This means that even people with degrees and professional experience are no longer protected from automation.

This shift is being driven by several forces working together. Technology is improving extremely fast, especially large language models that can understand and generate human language in a convincing way. Economics also plays a huge role because companies are under constant pressure to cut costs and maximize profits, and replacing thousands of workers with software that runs 24/7 is much cheaper in the long run. Culture also contributes because society increasingly treats AI as something inevitable and unstoppable, which creates a rush to adopt it before competitors do. Politics and regulation have not kept up, so there are few real protections for workers whose jobs disappear due to automation.

Some people benefit greatly from this shift. Executives, investors, and tech companies gain massive financial rewards when they automate work and reduce labor costs. Productivity numbers go up and profits increase. But workers lose stability, income, and in many cases their sense of purpose. Whole communities can be affected when large employers replace human jobs with machines. This raises serious questions about what work will mean in the future and how people are supposed to survive in a system where they are no longer needed in the traditional sense.

What should humanity do to solve this issue? Humans should develop a system that embraces AI but uses it to create a world where people do not have to live paycheck to paycheck. In theory this could happen if society worked together to distribute the wealth created by automation in a fair way. But in reality this feels more like a utopian dream than something that will actually happen. Instead, AI will likely replace more jobs and increase economic inequality, leading to instability and possibly a major crash. A new financial system may be introduced that claims to fix these problems, but it will likely be controlled by the same people who invested in the AI that caused the disruption in the first place. This is exactly the kind of future cyberpunk stories warned us about, where technology advances but humanity is left behind.

There is no Private Anymore

- Posted in BP01 by

Security cameras with popular social media platforms on their screens (image: Electronic Frontier Foundation)

Introduction

Privacy used to mean that what you did was personal to you and went unnoticed. Within the last five years, that boundary has changed tremendously. Digital surveillance, whether through games, apps, cameras, or facial recognition, has made being constantly monitored a normal part of day-to-day life. This reflects a core theme of cyberpunk: technology develops faster than the rules and regulations meant to keep it ethical.

What Has Changed

Surveillance is not just at the government and law enforcement level; it reaches civilians through our phones. An article by Wired discusses TikTok collecting and storing data from users, including location and data that can identify specific devices (Wired, 2026). This goes on in the background even when users are not posting, and unfortunately users do not even know the extent of the data being collected. This blurs the line between willingly sharing information and being constantly monitored. Surveillance goes beyond apps; it also includes studying and scanning people's facial and physical features. Facial recognition systems are being placed in public spaces, where they can identify people based on their biometric data. According to an article by ISACA, this raises serious privacy concerns because biometric information like a face cannot be changed the way a password or credit card number can. This data can be stored, shared, and used without people even knowing they are being tracked. Additionally, facial recognition data is often not encrypted, so it is easier for criminals to hack and exploit (ISACA, 2025). This further weakens the boundary between public spaces and personal privacy.
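To see why that difference matters, here is a minimal, purely illustrative Python sketch. It is my own simplified assumption of how such checks typically work, not ISACA's analysis or any vendor's actual implementation: a password is verified by an exact hash comparison and can simply be reset after a breach, while a face is reduced to a numeric template and matched by similarity, and there is no way to issue yourself a new face.

```python
# Minimal sketch (illustrative assumption, not a real system) of why a compromised
# password and a compromised biometric template are very different problems.
import hashlib
import secrets

# A password is checked by an exact hash comparison, and it can simply be reset.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = secrets.token_bytes(16)
stored = hash_password("old-password", salt)
stored = hash_password("new-password-after-breach", salt)  # reset: the old hash is now useless

# A face, by contrast, is reduced to a numeric template and matched by similarity.
# If that template leaks, you cannot issue yourself a new face.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

enrolled_template = [0.12, 0.80, 0.33, 0.54]   # hypothetical face embedding on file
new_scan          = [0.10, 0.78, 0.35, 0.55]   # today's camera scan of the same person

THRESHOLD = 0.95  # match if similarity exceeds a tuned threshold
if cosine_similarity(enrolled_template, new_scan) > THRESHOLD:
    print("Biometric match: identified wherever this template is deployed")
```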

These examples show how privacy now has to be actively guarded and protected, as opposed to being the default most people would expect.

What Is Driving This Shift

Several things are speeding up the growth of surveillance:

Technological advancement: AI and facial recognition tools are cheaper, faster, and more accurate.

Financial benefits: Companies profit from collecting and selling user data, while cities are encouraged to use technology to monitor public places.

Social Acceptance: Constant data collection has become expected in exchange for convenience and connectivity.

Weak regulation: Lawmakers struggle to keep up with quickly evolving surveillance technologies.

These drivers align with cyberpunk's focus on powerful systems working beyond meaningful public control (ISACA, 2025).

Cyberpunk Connections

A common theme within cyberpunk is a world where people are constantly watched by powerful companies or governments. This is not just an idea from a movie or a book anymore, but real life. As the idea of privacy goes away, technology gains more power and control over people, turning normal everyday activities into opportunities for data collection. The posthuman idea is also relevant, as people are defined less by themselves and more by their digital footprints and online profiles. These surveillance systems see people only as data points.

Who Benefits, Who Is Harmed

These surveillance technologies can and do improve security and efficiency, but they also bring problems and concerns. They have the potential to benefit governments, companies, and civilians, but can also hurt civilians. The more data that is collected, the more control civilians lose. As their information is collected, stored, and shared, the less they can protect it. The chances of this data being leaked and exploited go up, and the blame typically falls on the user (ISACA). Users are typically unaware of how much of their information is being collected. Therefore, the consequences of surveillance also harm civilians the most.

Conclusion

Digital surveillance is a part of everyday life, but it still leaves questions. What or who controls the data collected about civilians? How much privacy do civilians have to sacrifice for convenience and safety? At what point is surveillance not protecting, but controlling? These issues show that society is already living out the ideas of cyberpunk, where there is a thin line, if any, between the private and the public.

Sources

Ahmed-Adnan-Sheikh, H. (2025, November 13). Facial recognition and privacy: Concerns and solutions in the age of AI. ISACA. https://www.isaca.org/resources/news-and-trends/isaca-now-blog/2025/facial-recognition-and-privacy-concerns-and-solutions-in-the-age-of-ai

Rogers, R. (2026, January 23). TikTok is now collecting even more data about its users. Here are the 3 biggest changes. Wired. https://www.wired.com/story/tiktok-new-privacy-policy/

The Mind Is No Longer Human

- Posted in BP01 by

The Boundary That Used to Matter

For much of modern history, intelligence marked a clear boundary between humans and machines. Machines calculated; humans thought, created, and judged. Over the past five years, that boundary has begun to collapse. We now have generative artificial intelligence systems capable of writing essays, generating images, composing music, and simulating conversation. This has blurred the distinction between human cognition and machine processing in ways that feel lifted straight from cyberpunk. What once belonged exclusively to the human mind is now shared with algorithmic systems, forcing us to rethink what it even means to think.

When Information Lost Its Body

This shift reflects what theorist N. Katherine Hayles describes as the moment when “information lost its body.” In her work on posthumanism, Hayles explains how cybernetics reframed humans and machines as systems of information rather than fundamentally different beings. Once intelligence is understood as a pattern instead of a biological trait, it no longer needs a human body to exist. Generative AI makes this idea real. These systems treat language, creativity, and reasoning as data that can be modeled, trained, and reproduced without a human brain. Intelligence becomes something that circulates through networks rather than something anchored to flesh.
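As a small, concrete illustration of intelligence circulating as a pattern rather than residing in a body, here is a minimal sketch. It assumes the Hugging Face transformers library and the publicly released GPT-2 weights; these are assumptions made purely for illustration, not tools the post itself relies on.

```python
# Minimal sketch (assumes the Hugging Face `transformers` package is installed and
# the public GPT-2 weights can be downloaded): language treated purely as a pattern
# of data, generated with no human body or brain involved at inference time.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The boundary between human and machine intelligence"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The continuation is produced by sampling from statistical patterns the model
# learned from human-written text -- "information" circulating without a body.
print(outputs[0]["generated_text"])
```

The continuation it prints is nothing but sampled statistical regularities learned from human writing, which is exactly the disembodied "information" Hayles describes.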

Thinking With Machines, Not Just Using Them

This collapse of the human–machine boundary aligns closely with posthumanism, a central theme in cyberpunk. Posthumanism challenges the idea that identity or consciousness must be rooted in a stable, biological self. Humans no longer simply use technology; they think with it. People now rely on AI for tasks of every kind. In these moments, the human mind functions less as the sole origin of thought and more as an interface within a larger system. This dynamic mirrors what philosophers Andy Clark and David Chalmers describe in their theory of the extended mind, which argues that cognition can extend beyond the brain into tools and environments. When external systems support thinking, they become part of the thinking process itself. Generative AI pushes this idea further than ever before. Intelligence is no longer purely human or purely machine; it is distributed across both.

High-Tech Progress, Uneven Consequences

As cyberpunk narratives warn, technological progress rarely benefits everyone equally. While corporations that control AI infrastructure gain enormous power and profit, everyday people face uncertainty and displacement. Cognitive labor, once considered uniquely human, is increasingly being devalued. This reflects cyberpunk’s familiar “high-tech, low-life” condition, which is rapid technological advancement paired with growing inequality and concentrated control.

Living After the Boundary Collapsed

The blurring of human and machine intelligence raises urgent questions. If machines can convincingly simulate thought, what remains uniquely human? Who owns creativity when AI systems are trained on collective human culture? And how do we preserve dignity in a world where cognition itself is treated as a resource to be optimized?

Cyberpunk has always insisted that the future arrives unevenly and prematurely. The collapse of the human–machine boundary is no longer speculative fiction; it is a lived reality. Like cyberpunk protagonists navigating systems they did not design and cannot fully control, we are learning to survive in a world where intelligence has slipped its biological limits. The challenge now is deciding what kind of posthuman future we are willing to accept.

Sources
