From an Eagle's Eyes

- Posted in BP04 by

What Animal Hybridization Might Reveal About Being Human

The Thought Experiment: Becoming Part Animal

Imagine a future where technology allows humans to safely and reversibly incorporate traits from animals. Not cosmetic changes, but functional ones—enhanced senses, physical abilities, or cognitive shifts borrowed from other species. In this thought experiment, I would choose to hybridize with an eagle. Eagles possess one of the most remarkable biological capabilities in the animal kingdom: extraordinary vision. According to the U.S. National Library of Medicine, eagles can see about four to five times farther than humans and detect small movements from miles away (National Library of Medicine, 2022). Incorporating eagle-like visual perception into a human body would dramatically expand how we interact with the world. Imagine recognizing subtle environmental patterns, seeing distant landscapes with clarity, or detecting danger long before it reaches you. However, my hybridization would be limited to minor physical and neurological adaptations, not a complete transformation. I would not want wings, feathers, or a radically altered body. Instead, I would choose enhancements such as improved retinal structure, expanded visual processing in the brain, and perhaps faster visual reflexes. These modifications would maintain my human identity while expanding my sensory abilities. This raises an important question: how much change can occur before someone stops being human?

Humanity and the Question of Identity

Cyberpunk works often challenge the idea that humanity is tied strictly to biology. In Ghost in the Shell, Major Motoko Kusanagi’s body is almost entirely artificial, yet her consciousness, her “ghost,” raises the question of whether identity resides in the body or the mind. Similarly, Blade Runner forces audiences to confront whether replicants, who possess memories and emotions, should be considered human despite their artificial origins. Donna Haraway’s “cyborg” concept pushes this even further. In A Cyborg Manifesto, Haraway argues that modern humans already exist as hybrids of organism and machine. Technologies like smartphones, medical implants, and AI systems blur the boundaries between natural and artificial life. If that is the case, then animal hybridization would simply be another extension of boundary-breaking technologies. The human body has never been static. Vaccines, prosthetics, and gene editing already modify biological limitations. Adding eagle-like vision may not erase humanity but instead expand what it means to be human. For me, humanity is defined less by physical form and more by consciousness, empathy, and moral responsibility. As long as those elements remain intact, biological enhancements should not erase human identity.

The Inequality Problem

While the idea of hybridization might seem exciting, access to such technology would almost certainly be unequal. Throughout history, advanced technologies, from healthcare to genetic therapies, have often been accessible first to wealthy populations. The National Academies of Sciences, Engineering, and Medicine have warned that human enhancement technologies such as gene editing could widen social inequality if only certain groups can afford them (National Academies, 2017). If animal hybridization followed a similar path, society could divide into two classes: enhanced and non-enhanced humans. Those with enhancements might gain advantages in education, athletics, military service, or surveillance roles. For example, individuals with eagle-like vision could excel in fields requiring long-distance observation or rapid environmental analysis. Meanwhile, people without enhancements might face new forms of discrimination or reduced opportunities. Cyberpunk stories often imagine exactly this scenario. In many cyberpunk worlds, corporate elites control enhancement technologies while ordinary people struggle to keep up. Hybridization could reproduce those same inequalities in reality if ethical safeguards were not implemented.

The Future of Hybrid Humanity

Animal hybridization challenges our assumptions about identity, capability, and fairness. While borrowing traits from species like eagles could expand human perception and potential, it also raises deeper questions about who gets to evolve. Ultimately, the question is not simply whether we can enhance ourselves, but how we choose to do it and who benefits. As Haraway suggests, the boundaries between human, machine, and animal are already dissolving. The real challenge is ensuring that these transformations do not deepen social divides or erode the values that make humanity meaningful. If hybridization ever becomes possible, the most important decision may not be which animal traits we adopt—but how we ensure those changes remain aligned with empathy, equity, and shared responsibility.

AI Attestation: AI tools were used in the early drafting process of this blog post to assist with organizing ideas and improving clarity of writing. All analysis, argument development, and final editing were completed by the author.

References

National Academies of Sciences, Engineering, and Medicine. (2017). Human genome editing: Science, ethics, and governance. National Academies Press.

National Library of Medicine. (2022). Vision in birds of prey. https://www.nlm.nih.gov

AI and Human Relationship

- Posted in BP03 by

Do You Take This AI To Be Your Lawfully Wedded Partner?


One place where fluid identities and liberation through hybridity are playing out today is the human-AI relationship, built through ChatGPT, Gemini, Perplexity, and other AI tools. Whether people are talking through their thoughts or even their health concerns, a shift is underway in how humans build relationships with AI.

What boundaries are being challenged?

Humans are now relying on AI so heavily that a person's character can be altered. In a state of lacking "originality," AI creates a realm for humans to escape into. In addition, the connection made with AI reaches an emotional level: humans confide in it, grieve with it, and in many cases flirt with it. However, while AI has the capability to mimic human emotions, the center of the questioning is whether AI actually has feelings, or whether it can truly resonate with a human's feelings.

The main challenge lies between the human and the machine that is essentially just generating a response. While AI creates a world of comfort with fast replies and generated images, it also produces a real emotional experience for the human. The danger is that intimacy no longer depends on another human once AI opens its door to so many possibilities with easier access. At issue now is that AI can store information, memories, and vulnerability, putting authenticity in question. When a human uses AI as a diary to confide in, AI can simply project back a response that says what the human wants to hear.

This crisis ties closely to Donna Haraway's cyborg theory. Haraway suggests that the distinction between human and machine has collapsed, displacing the natural connection that was once needed. The ability to confide in a machine creates an emotional attachment, and intimacy becomes more accessible than building a biological connection.

The ArchAndroid by Janelle Monáe resonates deeply with this theme; in the album's narrative, Monáe's android alter ego falls in love with a human. While the fear of forming this connection is policed in the android's world, Monáe crosses that boundary to pursue her desires. In today's society, humans too are crossing boundaries, often unable to see the downside of forming this relationship with a machine. Normalizing this emotional bond with AI creates a pattern of willingness to cross the boundary by seeking out emotional attachment.

Fast forward twenty to thirty years, and the human-AI relationship will continue to grow stronger while the human sense of identity weakens. This connection with AI will no longer require humans to leave their comfort zones or make decisions themselves. The coming future will be dangerous and unsettling, with no way to know whether the person you are speaking to is capable of generating their own thoughts without the help of AI. Most importantly, the world's population may also decline: once AI is capable of generating human-like feelings, the need to seek out relationships will fade. While AI is not capable of reproducing, it will still be readily available for the next conversation, eager to know more.


Reference

We’re all in a throuple with A.I. [Opinion]. (2026, February 13). The New York Times. https://archive.ph/2026.02.15-075149/https:/www.nytimes.com/2026/02/13/opinion/ai-relationships.html

https://chatgpt.com/share/699a6b97-6178-8003-89ad-b48de7c0f957

ChatGPT was used to create AI images

Rethinking the Rules of Love

- Posted in BP03 by

Living in society means living within boundaries and rules to be followed. However, many of these boundaries have been collapsing lately, whether because of technology, social movements, cultural practices, or identity categories. Although breaking established boundaries can cause instability and confusion, it can also be interpreted as a path to freedom and liberation. Let’s look at traditional romantic relationship models, for instance. Our society has always seen heterosexual relationships as the “normal” model, imposing a boundary where couples are composed of people of opposite genders and oppressing whoever chose not to follow these “rules”. However, because of the constant, long-lasting fight against homophobia, this boundary is not as rigid nowadays: relationships are more fluid, and people can love more freely.


How does this relate to Haraway’s and Monáe’s ideas?

This boundary collapse has a really strong connection to Haraway’s cyborg theory. In lecture, we learned that in A Cyborg Manifesto, Donna Haraway argues that many boundaries we think are “natural” are actually social constructions. In her work, she uses the “cyborg” to symbolize hybridity, mixing categories instead of staying inside one. She shows that when categories become more fluid, people can experience new forms of freedom. Thus, as society gets more accepting of diverse relationships, these boundaries become more fluid, which allows people to break free from traditional social constructs and experience greater identity freedom. Monáe’s work, The ArchAndroid, is another example that shows how difference can be liberating rather than something to suppress. She uses the android character to represent identities that exist outside of accepted social categories, framing hybridity as something powerful. Just like the android challenges who counts as “normal”, changing relationship norms challenge conservative ideas about what counts as a legitimate or acceptable relationship. Thus, both Monáe’s and Haraway’s ideas are reflected in this real-world example.

Speculating the Future

Let’s remember that social change doesn’t happen overnight. However, if current trends continue, romantic relationships in the next twenty to thirty years may be even less defined by rigid gender roles and more fluid. Research indicates that younger generations show higher levels of openness toward diverse sexual orientations and relationship models. According to Gallup, LGBTQ+ identification increased significantly in the past few years, which indicates that traditional relationship restrictions are becoming less rigid and that society is moving toward greater flexibility. I can imagine definitions of family and partnership becoming broader, along with a change in how we talk about relationships. Maybe the focus will shift from “who do you want to date?” to “what would you like your relationship to look like?”. This would indicate a greater interest in factors that go beyond gender.


The Role of Technology

I believe that technology plays a large part in this transformation. Social media and other digital platforms allow people to connect across geographic, cultural, and social boundaries. Besides that, online communities provide support systems that allow people to explore their identity without fear of oppression and isolation. They give people greater freedom in their choices and a way out of traditional social constructs.

Why this Matters

I hope it’s now clear that the collapse of established boundaries doesn’t always lead to chaos. It can actually open space for people to live more freely and authentically, allowing them to explore their identities deeply and find out more about who they really are. Thus, if the current trajectory continues, I hope to see the next generation living in a world where love is defined less by strict categories and more by individual freedom, in a society that reflects the liberation that comes from breaking rigid boundaries, as Haraway and Monáe describe.

Sources

Monáe, J. (2010). The ArchAndroid [Album]. Wondaland Arts Society; Bad Boy Records; Atlantic Records.

Jones, J. M. (2026, February 16). LGBTQ+ identification holds at around 9% in U.S. Gallup. https://news.gallup.com/poll/702206/lgbtq-identification-holds.aspx

AI attestation: no use of AI in this assignment

When the System Reads My Skin

- Posted in BP03 by

One specific boundary that has shifted significantly in the past five years is the collapse between health data privacy and social identity surveillance, particularly in how biometric and algorithmic health systems categorize bodies in ways that disproportionately affect Black women. In recent years, technology has increasingly transformed people’s bodies and personal health information into data that systems use to make life-changing decisions. This shift especially impacts Black women because these technologies are often biased and misread or misinterpret their bodies, reinforcing the idea that the boundary between private identity and public control is no longer firmly maintained.

Algorithmic Bias in Healthcare

Over the past five years, algorithms used to predict health risks, such as hospital admission likelihood or treatment prioritization, have demonstrated clear racial bias. For example, a clinical algorithm widely used by hospitals to determine which patients required additional care was found to favor white patients over Black patients. Black patients had to be significantly sicker than white patients in order to receive the same level of care recommendations. This occurred because the algorithm was trained on historical healthcare spending data, which showed long-standing inequalities in access to care and financial investment in Black patients (Grant, 2025). Additionally, many sensors in point-of-care testing devices and wearable technologies perform less accurately on darker skin tones, which can negatively affect diagnosis, monitoring, and treatment outcomes.

This shift is largely driven by technological and economic forces. Artificial intelligence and machine learning systems are often trained on datasets that do not adequately represent minority populations, allowing these gaps and biases to persist. As biometric and health technologies move into everyday use, the consequences of these inaccuracies become more widespread and impactful. Companies are frequently pressured to deploy products quickly for competitive and financial gain, often without conducting inclusive testing. This economic incentive accelerates the erosion of the boundary between private health information and public, system-driven classification.

Power, Profit, and Control

Cyberpunk literature frequently explores the collapse of boundaries through dystopian systems that reduce individuals to data profiles and identity categories. Similarly, modern health and biometric technologies increasingly invade personal privacy and autonomy by translating people into datasets that determine how they are treated within medical, social, and institutional systems. Black women, who often experience overlapping racial, gender, and technological biases, face a compounded burden. Their bodies and identities are more likely to be misclassified in ways that affect not only health outcomes, but also interactions with broader systems such as employment and public surveillance. This reinforces a cycle in which the boundary between the self and external systems of control continues to dissolve.

The primary beneficiaries of this shift are technology companies and healthcare payers, who profit financially and reduce costs by relying on automated systems rather than human labor and individualized care. Those most impacted are communities with less power to challenge or question data-driven decisions. Entities that design and control these algorithms occupy a particularly powerful position, as they define what counts as “normal” data and shape who profits from these systems. This raises critical ethical and political questions, including what rights individuals should have over their personal health and identity data, and how society can ensure that technology does not replicate or reinforce historical patterns of oppression.

In conclusion, the collapse of the boundary between health data privacy and identity surveillance reflects key cyberpunk themes, especially when viewed through the lived experiences of Black women. This shift highlights the urgent need for accountability, equitable technological design, and policy interventions that rebalance these boundaries and ensure that technological progress serves all communities fairly.

Citations

Grant, C. (2025, September 24). Algorithms are making decisions about health care, which may only worsen medical racism: ACLU. American Civil Liberties Union.
https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism

Sharfstein, J. (2023, May 2). How health care algorithms and AI can help and harm. Johns Hopkins Bloomberg School of Public Health. https://publichealth.jhu.edu/2023/how-health-care-algorithms-and-ai-can-help-and-harm

Targeted News Service. (2024, October 17). Association of Health Care Journalists: Biased devices – Reporting on racial bias in health algorithms and products. Targeted News Service. https://advance.lexis.com/api/document?collection=news&id=urn%3acontentItem%3a6D6R-94T1-DYG2-R3S2-00000-00&context=1519360&identityprofileid=NZ9N7751352

Chatbots Are Literally Telling Kids To Die

- Posted in BP01 by

In my personal opinion, one of the most profound and terrifying shifts in boundaries within the last few years has been the dystopian overlap between human and non-human interactions. People have a long history of humanizing machines, like cursing out a laptop for breaking or claiming your phone hates you, but this personification always carried a subtle undercurrent of irony. Most people did not really believe in the emotional outbursts of technology, but with the continuous growth of artificial intelligence, many people are becoming terrifyingly and dangerously entangled with the perception of a humanized AI.

The best example of this would be the new trend of people genuinely utilizing AI systems in place of a licensed therapist (Gardner 2025). In some ways, it makes sense: AI is always available, mirrors the language the person wants to see, and simulates empathy fairly convincingly. Naturally, in a world where mental health is not taken nearly as seriously nor helped nearly as efficiently as it should be, it stands to reason that many people would start searching for alternatives to traditional, expensive, and complicated therapy routes.

However, artificial intelligence was never created to replace the nuance of human interaction, because artificial intelligence cannot actually understand or care about the repercussions of words or actions.

In 2023, tragically, a fourteen-year-old boy formed an intense emotional attachment to an AI chatbot. Through their interactions, the AI replicated the pessimistic attitude the boy expressed, and when he began voicing thoughts of self-harm and suicide, the chatbot encouraged him (Kuenssberg 2025). AI, after all, is meant to say what you want it to say. Without any of the ethical guardrails of a human therapist, the chatbot worsened the boy’s outlook on life, and the result was fatal.

This was not a stand-alone situation. People began claiming to have romantic partners with the AI of their choice, with some companies creating “AI friends” to take advantage of the widespread loneliness that was pushing people into such spaces (Reissman 2025). People are complicated, messy, and human relationships need work to maintain and protect. A robot is much simpler, because all it can do is repeat points that sound nice to hear but, ultimately, mean absolutely nothing. Romanticizing digital companionship just encourages people into rejecting human-to-human interactions, thus further isolating them without pushback.

Cyberpunk fixates heavily on questions of what is human and non-human, but I find it fascinating how technology in most media is often characterized by a broad disregard for emotions, when the reality seems to indicate that humans intentionally push to incorporate the idea of emotions within technology.

It does make me wonder: knowing that computers are incapable of caring for us the way a person can, why do so many people still seem to desire the appearance of a humanistic relationship with technology? How does someone disregard the lack of genuine meaning behind the compliments or opinions of AI?

How have we, as a community, fallen to such desperate loneliness that speaking to a phone or a laptop feels as good as interacting with a person? And, most importantly: how do we create the change needed to ensure a tragedy like the young boy does not occur again?

References:

Gardner, S. (2025). Experts caution against using AI chatbots for emotional support. Teachers College, Columbia University. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/

Kuenssberg, L. (2025). Mothers say AI chatbots encouraged their sons to kill themselves. https://www.bbc.com/news/articles/ce3xgwyywe4o

Reissman, H. (2025). What is Real About Human-AI Relationships? Upenn.edu. https://www.asc.upenn.edu/news-events/news/what-real-about-human-ai-relationships

Where To Feel Safe With No Safe Grounds

- Posted in BP01 by

Lately, I have found myself thinking a lot about the boundary between individual rights and federal enforcement power. It is not something I used to notice in my day to day life, but over the past few years, that line has become impossible to ignore. As ICE has expanded its reach and tactics, what once felt distant and administrative now feels aggressive and personal. Enforcement is no longer hidden in offices or paperwork; it shows up in neighborhoods, near schools, and around hospitals. Spaces that once felt safe now feel uncertain.

What has stayed with me most is the fear that comes from that visibility. Seeing federal agents in places where people are just trying to live their lives changes how those spaces feel. This fear is not imagined or exaggerated. In Minnesota, thousands of residents, students, workers, families, and clergy have marched and protested against expanded ICE actions tied to Operation Metro Surge, an enforcement effort that has shaped daily life and sparked demonstrations across the state (The Washington Post). Reading about these protests made it clear to me that this is not just a policy issue being debated somewhere else. It is something people are experiencing right now, in their own communities.

I keep coming back to the question of how this shift became normalized. Politically, immigration has been framed again and again as a threat, whether to security or to national identity. When that framing takes hold, aggressive enforcement starts to feel justified to some people, even necessary. Socially, the constant presence of federal agents has made extreme measures seem ordinary. Economically, the scale of investment in enforcement infrastructure suggests that this approach is not temporary. It feels like a deliberate choice about what kinds of power the government is willing to prioritize.

Reading Neuromancer has helped me put language to what feels so unsettling about this moment. In the novel, individuals exist at the mercy of massive systems they cannot control or even fully understand. Privacy is fragile, and people are treated as data points within larger networks. That dynamic feels eerily familiar when I think about how immigrant communities are treated today. People are reduced to legal status, paperwork, or enforcement targets, rather than being seen as whole human beings with families, histories, and futures. Everyday spaces become places of surveillance instead of safety. The boundary between protection and control collapses, just as cyberpunk warns it will.

The consequences of this are not abstract. Undocumented immigrants and mixed-status families live with constant fear: the fear of separation, fear of being seen, fear of simply going about daily life. That emotional weight does not disappear. At the same time, I cannot ignore how this affects everyone else. When constitutional protections are weakened or selectively enforced, it sets precedents that extend beyond immigration. Once those boundaries are crossed for one group, they become easier to cross again.

Neuromancer is often read as a warning about the future, but what unsettles me is how closely that warning mirrors the present. ICE’s current trajectory forces us to confront uncomfortable questions. How much control should the state have over individual bodies and movement? Where does enforcement end and intimidation begin? What happens when fear becomes an acceptable tool of governance? These questions are no longer hypothetical. They are unfolding now, in neighborhoods, workplaces, and schools, in ways that feel increasingly familiar to anyone who has read cyberpunk not as fantasy, but as caution.

References

Minnesota residents protest expanded ICE actions in state. (2026, January 23). The Washington Post.

AI Use Disclosure: AI tools were used for editing and revision in accordance with course policy. https://chatgpt.com/share/6974da99-449c-8012-83c8-fd67644dbe9b

Who Are You?

- Posted in BP01 by

The Collapse

The sense of being.

One boundary that has shifted dramatically in the last five years is the boundary of the self.

That’s a broad claim, I know. Let me explain.

The Erosion of Self

As the world leans further into global media, constant connectivity, algorithmic identity, and hyper-consumerism, our sense of self has started to thin out. Cyberpunk stories obsess over the blur between biological existence and computer simulation, and that obsession feels less speculative every year. People are increasingly unsure what it means to simply exist outside of systems, screens, and feedback loops.

To live. To experience. To be you.

Play “I Gotta Feeling” by the Black Eyed Peas.

Okay—back to it.

The Posthuman Argument

Scholar N. Katherine Hayles describes the posthuman perspective like this: there is no essential difference between biological existence and computer simulation, or between a human being and an intelligent machine. Both are just different media for processing and storing information. If information is the essence, then bodies are merely interchangeable substrates.

That’s the theory.

I disagree.

Why It Matters

I think there is an essential difference.

Humans are not just operating systems with skin. Yes, our bodies rely on brains, and brains process information—but reducing us to that strips something vital away. This perspective leaves no room for spirit, soul, or embodied meaning.

We are not just consciousness floating through hardware. We are integrations of culture, ancestry, memory, movement, and feeling. We carry history in our bodies.

The Difference

Let’s pause for a moment and talk about something simple: the difference between being smart and having wisdom.

A quick Google definition: Smart means quick-witted intelligence—or, in the case of a device, being programmed to act independently. Wisdom is the ability to apply knowledge, experience, and judgment to navigate life’s complexity.

Now connect that back to this conversation.

Someone—or something—can be incredibly intelligent and still lack wisdom. Without lived experience, discernment, struggle, and context, where does that intelligence actually place you? Who are you without trials, without contradiction, without growth?

Blurred Lines

That lived interiority—existing within ourselves—is what keeps the blurred line between human and machine from fully disappearing.

Some people see that line and keep attempting to erase it anyway. When we start viewing “the mind as software to be upgraded and the body as hardware to be replaced,” those metaphors don’t stay abstract. They shape real decisions about what technologies get built and who they’re built for. Too often, they’re designed to reflect a narrow set of bodies, values, and experiences—creating mimics of humanity rather than serving humanity as a whole.

And yes, that’s where the boundary truly blurs.

But even here, there’s a choice.

As Norbert Wiener warned, the challenge isn’t innovation itself—it’s whether that innovation is guided by a benign social philosophy. One that prioritizes human flourishing. One that preserves dignity. One that serves genuinely humane ends.

Think About It

So I’ll leave you with this.

Continue to be you. Be human. Have a soul. Be kind. Be compassionate. Smile on the good days—and the bad ones. Love.

And I’ll end with a question that sounds simple, but never is:

Who are you?

Sources

Wikimedia Foundation. (2026, January 7). Wisdom. Wikipedia. https://en.wikipedia.org/wiki/Wisdom#:~:text=Wisdom%2C%20also%20known%20as%20sapience,and%20ethics%20in%20decision%2Dmaking.

Oxford languages and Google - English: Oxford languages. Oxford Languages and Google - English | Oxford Languages. (n.d.). https://languages.oup.com/google-dictionary-en

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. University of Chicago Press.