Personal Privacy in the Digital Age

- Posted in BP01 by

One of the defining features of cyberpunk fiction is the breakdown of boundaries between humans and machines, nations and corporations, and especially between public and private life. What once felt like a dystopian exaggeration is increasingly becoming reality. Over the past five years, the boundary between personal privacy and corporate/governmental surveillance has shifted dramatically. The line separating what belongs to the individual and what can be collected, analyzed, and sold has grown thinner than ever before. A clear contemporary example of collapsing privacy boundaries is emerging in Edmonton, where police have launched a pilot program using body cameras equipped with AI to recognize faces from a “high-risk” watch list in real time. What was once seen as intrusive or ethically untenable—the use of facial recognition on wearable devices—has now moved into operational testing in a major Canadian city, prompting debate from privacy advocates and experts about the societal implications of such pervasive surveillance.

Expanding Data Collection

Today’s apps and platforms gather far more than basic profile information. Social media companies track users’ locations, browsing habits, interactions with AI tools, and even behavioral patterns across different websites. For example, updates to privacy policies from major platforms like TikTok and Meta now allow broader data harvesting, often as a condition for continued use. Many users unknowingly exchange massive amounts of personal information simply to stay connected.

## The Rise of Biometric Surveillance

Facial recognition technology has moved from science fiction into everyday life. Law enforcement agencies increasingly use AI-powered systems to scan crowds, identify individuals, and track movements in real time. While these tools are promoted as improving public safety, they blur the boundary between public presence and constant monitoring. People can now be identified and recorded without their knowledge or consent.

## Uneven Legal Protections

Some governments have attempted to respond with new privacy laws, such as the European Union’s AI regulations and stricter data protection frameworks in countries like India. These laws aim to limit how companies collect and use personal information. However, regulations remain fragmented and often struggle to keep pace with rapidly advancing technologies. This leaves significant gaps where corporations can continue exploiting personal data.

What’s Driving This Shift?

Technology

Advances in AI and big data analytics make it incredibly easy to process enormous amounts of personal information. Facial recognition, predictive algorithms, and personalized advertising rely on constant surveillance to function.

Economics

Personal data is now one of the most valuable resources in the digital economy. Companies profit from targeted advertising, AI training, and personalized services built entirely on user information. Privacy has effectively become a currency.

Who Benefits—and Who Pays the Price?

Beneficiaries

  • Tech corporations that profit from user data

  • Governments that gain expanded surveillance capabilities

Those Impacted

  • Everyday individuals losing control over personal information
  • Marginalized communities disproportionately targeted by surveillance technologies
  • People wrongfully identified by biased AI systems

Associated Press. (2024). AI-powered police body cameras, once taboo, get tested on Canadian city’s “watch list” of faces. AP News. https://apnews.com/article/21f319ce806a0023f855eb69d928d31e

Blog Post #1: Eyes Everywhere; AI Surveillance

- Posted in BP01 by

Ever wonder who watches surveillance cameras beyond federal agents, police, and security personnel? Artificial intelligence has quietly become incredibly advanced, capable of tracking personal information and recognizing faces with astonishing accuracy. But where does AI store this information, and who has access to it?

Before the rise of AI, surveillance systems relied on continuous 24/7 recording that had to be carefully monitored by human operators. These individuals ensured that footage was not distorted, corrupted, or lost due to limited storage space. According to the Security Industry Association, AI can monitor and analyze network traffic in real time, strengthening network security and identifying suspicious activities such as unauthorized access attempts or unusual data transfers. When these activities are detected, users can take immediate action to block or contain potential threats.
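The kind of anomaly flagging described above can be illustrated with a small statistical sketch. This is not any vendor’s actual method: the deviation threshold, the function name, and the idea of flagging transfers against a simple baseline are assumptions for illustration only.

```python
# Minimal sketch: flag data transfers whose volume deviates sharply
# from the observed baseline (hypothetical threshold, toy data).
from statistics import mean, stdev

def find_anomalies(transfer_sizes_mb, threshold=2.0):
    """Return transfers more than `threshold` standard deviations
    from the mean volume (a crude stand-in for AI anomaly detection)."""
    if len(transfer_sizes_mb) < 2:
        return []
    mu = mean(transfer_sizes_mb)
    sigma = stdev(transfer_sizes_mb)
    if sigma == 0:
        return []  # perfectly uniform traffic: nothing stands out
    return [size for size in transfer_sizes_mb
            if abs(size - mu) / sigma > threshold]

# Typical traffic plus one outsized transfer a monitor might flag.
traffic = [10, 12, 11, 9, 13, 10, 11, 500]
print(find_anomalies(traffic))  # prints: [500]
```

Real systems learn far richer baselines (per user, per time of day, per destination), but the principle is the same: deviation from expected behavior triggers review.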

While many argue that AI improves security, it also introduces significant challenges. One major concern is security breaches, as AI systems themselves can become targets for cyberattacks. Another issue is compliance, which is essential to avoid legal consequences and requires adherence to national and international regulations governing the use of AI. Addressing these concerns may require collaboration not only with AI technologies themselves but also with AI developers, cybersecurity professionals, and regulatory experts. AI holds the promise of a more holistic approach to security; however, many people place trust in AI without fully understanding where their data is stored or how it is used.

This shift reflects a cyberpunk-like reality of high technology and low transparency, in which advanced systems coexist with humans in everyday life. Surveillance cameras are now embedded in our devices, networks, and infrastructure, allowing AI to operate with minimal human oversight.

Facial recognition has advanced significantly over the decades and has blended seamlessly into daily life. According to Critical Tech Solutions, AI facial recognition combines imaging, pattern recognition, and neural networks to analyze and compare facial data. This process typically involves three steps: capturing facial data, converting faces into digital templates, and matching and verification.
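The three steps described above (capturing facial data, converting faces into digital templates, and matching) can be sketched in a few lines. The “template” below is only a normalized vector of toy pixel values, and the similarity threshold is an assumed number; real systems derive templates from neural-network embeddings.

```python
# Hedged sketch of the capture -> template -> match pipeline.
import math

def to_template(face_pixels):
    # Step 2: convert captured face data into a fixed-length numeric
    # template (toy unit-length normalization, not a real model).
    norm = math.sqrt(sum(p * p for p in face_pixels)) or 1.0
    return [p / norm for p in face_pixels]

def match(template_a, template_b, threshold=0.9):
    # Step 3: verification via cosine similarity between templates
    # (threshold is an illustrative value, not an industry standard).
    similarity = sum(a * b for a, b in zip(template_a, template_b))
    return similarity >= threshold, similarity

# Step 1 (capture) is simulated here with small pixel lists.
enrolled = to_template([120, 130, 125, 118])
probe    = to_template([121, 129, 126, 117])
matched, score = match(enrolled, probe)
print(matched, round(score, 3))  # prints: True 1.0
```

The design choice worth noting is the threshold: set it too low and strangers match (false positives, the source of wrongful identifications mentioned earlier); set it too high and legitimate matches fail.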

As we progress in today’s world, AI will continue to grow smarter, stronger, and more human-like. It is ultimately our responsibility to establish boundaries to ensure that AI does not override human authority or become a tool for harm.

Sources

Dorn, M. (2025a, November 18). Understanding AI facial recognition and its role in public safety. Tech Deployments Made Simple by Critical Tech Solutions. https://www.criticalts.com/articles/ai-facial-recognition-how-it-works-for-security-safety/

Dorn, M. (2025b, December 30). How AI surveillance transforms modern security. Tech Deployments Made Simple by Critical Tech Solutions. https://www.criticalts.com/articles/how-ai-surveillance-transforms-modern-security/

Galaz, V. (n.d.). ScienceDirect. https://www.sciencedirect.com/science/article/am/pii/S0160791X21002165

Segil, J. (2024, March 19). How AI can transform integrated security. Security Industry Association. https://www.securityindustry.org/2024/03/19/how-ai-can-transform-integrated-security/

https://chatgpt.com/share/697574ec-b270-8003-8613-1bbb06691394

ChatGPT was used to create an AI image and to revise my original thoughts into clearer, more organized writing.

A Letter to My CIA Agent

- Posted in BP01 by

Dear Sir, Madam, or Algorithm,

I assume you are reading this. Not because I have done anything remarkable, but because in a world shaped by digital systems, observation has become routine rather than something exceptional.

Five years ago, I still thought of privacy as something I possessed, imperfectly, maybe, but meaningfully. I assumed that my movements, conversations, and online habits were largely my own unless I chose to share them. That assumption has quietly worn away. Not through a single policy change or technological breakthrough, but through countless small decisions like agreeing to terms of service, enabling location access, and storing personal information in the cloud.

There was no clear moment when the boundary disappeared. It simply stopped being visible.

What has shifted most in recent years is not the existence of surveillance, but its structure. Governments increasingly rely on private companies to collect and organize personal data, then access it through legal requests or market transactions. According to reporting by Proton, authorities worldwide, particularly in the United States, have dramatically increased their requests for user data from major technology firms, often with limited transparency and oversight (Koch, 2025). In this arrangement, corporate data collection and state surveillance are no longer meaningfully separate.

This shift reflects a broader normalization of data as a form of currency. Individuals exchange personal information for convenience, connectivity, and access to digital services. Companies monetize that data. Governments acquire it. Each step is justified as efficient, legal, or necessary. However, when taken together, they blur the line between consent and compliance.

The American Civil Liberties Union has documented how U.S. agencies such as the Department of Homeland Security have purchased location data from brokers rather than obtaining warrants, effectively bypassing constitutional safeguards (Venkatesh & Yu, 2026). While proponents argue this practice operates within existing legal frameworks, it raises important questions about whether privacy protections remain meaningful when personal data is treated as a commodity.

Similar patterns appear beyond the United States. In Jordan, authorities reportedly used phone-extraction tools to access activists’ devices, targeting political dissent through technological means (Kirchgaessner, 2026). These cases highlight how surveillance technologies are easily transferred across borders and contexts, and how they often impact those already vulnerable to state power.

Even technical protections such as encryption, long framed as firm barriers to access, now prove to be conditional. In early 2026, Microsoft confirmed that it provided encryption keys to U.S. authorities when legally compelled to do so, prompting concern among privacy advocates about precedent and potential misuse (O’Brien, 2026). Security, it seems, depends less on technological limits than on institutional trust.

To be clear, surveillance systems are frequently defended on grounds of public safety, efficiency, and national security. These concerns deserve serious consideration. Yet the collective effect of extensive data collection and expanded access warrants equally serious scrutiny. Who benefits from this visibility? Who bears the risks? And how should societies balance collective security with individual autonomy?

I do not offer simple answers. What I do offer is a sense that we have crossed a boundary without fully acknowledging it. Privacy has now been redefined and negotiated continuously in ways that are often invisible to the people most affected. It is well on its way to completely vanishing.

Thank you for your time and attention.

Warm regards,

One of your many data points


References

Kirchgaessner, S. (2026, January 22). Jordan used Israeli firm’s phone-cracking tool to surveil pro-Gaza activists, report finds. The Guardian. https://www.theguardian.com/world/2026/jan/22/jordan-israeli-spyware-gaza-activists

Koch, R. (2025, February 27). Authorities worldwide can see more than ever, with Big Tech as their eyes. Proton. https://proton.me/blog/big-tech-data-requests-surge

O’Brien, T. (2026, January 24). Microsoft handed the government encryption keys for customer data. The Verge. https://www.theverge.com/news/867244/microsoft-bitlocker-privacy-fbi

Venkatesh, A., & Yu, L. (2026, January 12). DHS is circumventing Constitution by buying data it would normally need a warrant to access. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/dhs-is-circumventing-constitution-by-buying-data-it-would-normally-need-a-warrant-to-access