AI Takes the Stand: A Breakthrough in Justice or the Breakdown of Personhood?
BrightMinds Bulletin
Exploring how legal definitions of identity shape the future of human and machine agency.
The Legal Weight of Personhood
In the United States, legal definitions of “personhood” have evolved significantly over time. While originally intended to guarantee rights and protections to individual citizens, certain landmark cases have expanded these definitions to include non-human entities. These decisions, though made in specific historical contexts, have often shaped the trajectory of law, governance, and public life far beyond their original scope.
One early and surprising shift came not in a case about individual rights, but about taxation. In 1886, the U.S. Supreme Court heard Santa Clara County v. Southern Pacific Railroad Co. Though the case centered on how taxes were applied to railroad property, it is remembered for a headnote (a summary written by the Court reporter) stating that the Justices believed corporations were protected by the Fourteenth Amendment.
This small inclusion, despite not being part of the formal ruling, set a powerful precedent. Over time, corporations began claiming constitutional rights once thought to belong only to living, breathing individuals: equal protection, due process, and eventually, freedom of speech.
The Expansion of Non-Human Rights
Legal momentum continued. In 1978, the Supreme Court decided First National Bank of Boston v. Bellotti, ruling that corporations had the right to spend money in political campaigns. Decades later, in Citizens United v. FEC (2010), corporate political spending was again defended as a form of protected speech under the First Amendment.
These decisions were built on the logic that entities without physical bodies or human consciousness could still exercise rights originally framed for individuals. The effects have been wide-ranging, reshaping campaign finance, corporate influence, and the relationship between power and personhood.
A New Voice at the Witness Stand
The recent case of Christopher Pelkey in Chandler, Arizona, marks a significant moment in legal history. Pelkey, a 37-year-old Army veteran, was fatally shot during a road rage incident in 2021. In a groundbreaking move, his family used artificial intelligence to create a video in which an AI-generated likeness and voice of Pelkey delivered his own victim impact statement during the sentencing of his killer, Gabriel Horcasitas.
This marked what is believed to be the first use of AI in a U.S. courtroom to recreate the voice and message of a deceased individual. The judge acknowledged its emotional power and ultimately sentenced the defendant to 10.5 years, exceeding the recommendation from prosecutors.
Was this justice enhanced by technology, or was a line blurred between authenticity and simulation?
Pause for Perspective: What Does It Mean to Be Heard?
AI takes the stand... breakthrough or breakdown? This moment isn’t about whether artificial intelligence can speak for us. The question is: why do we want it to?
When I first learned of this case, my initial reaction was discomfort, an uneasy hesitation about where this might lead. I couldn’t help but think about how far-reaching one seemingly simple act can become. As I listened to the victim’s sister describe the thoughtfulness behind the AI statement and the gratitude expressed by family members, I also wrestled with the reality that Christopher Pelkey himself was not there to speak.
This, to me, is the paradox: a tool designed to preserve memory and meaning may unintentionally assume a voice it cannot truly own.
As someone who has spent years advocating for ethical technology in human-centered systems, I won’t take a side here. But I will offer a question:
Are we inviting AI into once-sacred spaces because carrying the full weight of human experience feels unbearable?
And if so, will there come a time when we look back with quiet regret at the moments we outsourced, or find that the experience was diluted or softened along the way?
The law will adapt; the lines may continue to blur. But before we rush forward, we must pause and ask: What does it mean for justice, for memory, for humanity, when we let machines speak in our place?
Emerging Technologies and Ethical Tension Points
As technology continues to advance, we find ourselves revisiting old legal and ethical questions in new ways. The past gave us corporations as legal persons. The present gives us artificial intelligence speaking with a human face. What comes next is uncertain, but the importance of ethical discernment has never been clearer.
Legal scholars Citron and Pasquale (2011) argue that the law has begun extending protections to algorithms without demanding equivalent responsibilities. This asymmetry raises a cautionary question: when the tools of technology mimic personhood but lack conscience, who becomes accountable for their actions? Meanwhile, the appearance of ethical sophistication, such as an AI-generated courtroom statement, can obscure deeper concerns about agency, consent, and authenticity.
Stephanie Schechter’s (2024) work in psychoanalytic ethics offers an important lesson across disciplines: complex ethical challenges cannot be resolved through rigid rules alone. “We do not find perfect solutions because they do not exist,” she writes. “Our uncertainty is acknowledged, discussed, and grappled with.” Ethical growth, she argues, comes from structured, safe exploration of hard questions.
Every system that touches people’s lives carries this risk. But perhaps the most important place to begin is in schools, while young adults are in the process of developing their sense of identity and finding their place in the world. During this time, they can learn to hold complexity, consider far-reaching implications, and ask difficult questions: not just “Can we do this?” but “Should we?”
We must teach ethical questioning, not just ethical compliance.
Looking Forward: Teaching the Questions
If I could ask anyone preparing to implement AI one thing, it would be this: Why? Why are you bringing in this technology, and what do you hope to gain from it? We must push for clarity of purpose, not just innovation for innovation’s sake.
There is tremendous potential for good. AI can help students receive instant feedback on their writing, giving them insight into their strengths and weaknesses. It can support teachers, doctors, designers, and social workers alike. But only when guided by human judgment and ethical reflection.
We must also resist the fear-driven instinct to ban what we don’t understand. Just as we wouldn’t expect a pilot to master graphic design software, we shouldn’t expect every person to master every new piece of technology. But we can create cultures of openness, spaces where it is okay to ask questions, seek help, and say “I don’t know.”
I want readers to feel hopeful, not helpless. I want them to reflect on what they may not yet understand, and realize that this uncertainty is normal, even necessary. Because hope in people, in communities, and in our collective capacity to adapt is what will carry us forward. We don’t need perfect answers. But we do need each other.
References
Citron, D. K., & Pasquale, F. (2011). The scored society: Due process for automated predictions. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1732910
Schechter, S. (2024). Ethics education in psychoanalytic institutes. Psychoanalytic Inquiry, 44(2), 178–193. https://doi.org/10.1080/07351690.2024.2306104
United States Supreme Court. Santa Clara County v. Southern Pacific Railroad Co., 118 U.S. 394 (1886). https://supreme.justia.com/cases/federal/us/118/394/
United States Supreme Court. Citizens United v. Federal Election Commission, 558 U.S. 310 (2010). https://supreme.justia.com/cases/federal/us/558/310/
AP News. (2025, May). Family uses AI to create video for deadly Chandler road rage victim’s own impact statement. https://apnews.com/article/952526055d4ed612bc5cf55fa8dea208