When a judge in Arizona recently sentenced a man for manslaughter, he praised a victim impact statement delivered not by a grieving family member, but by an AI-generated likeness of the dead.
The court called it “powerful.” Charles Dickens might have called it terrifying.
IT WAS THE BEST OF TIMES, IT WAS THE WORST OF TIMES.
It was the age of information, it was the age of misinformation, it was the epoch of unfettered certainty, it was the epoch of concealed insecurity, it was the season of Light, it was the season of Darkness, it was the spring of creativity, it was the winter of fragility, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way...
In short, our era is much like the one Charles Dickens warned us about more than 160 years ago: a tumultuous tipping point where the loudest voices demand popular compliance, flatten complex truths into uncompromising absolutes, classify every moment as either fully justifiable or unforgivable, and erase nuance in favor of sanctimonious spectacle.
And it is here where we find ourselves standing alongside Artificial Intelligence at a crossroads that will define the next century of being human: a renaissance of inexhaustible creativity, or a dark age of dystopian misuse.
Now we must decide: will we allow a reckless minority to weaponize AI to engineer emotions and manipulate outcomes, or will we choose a path where AI helps us create our way back to ourselves, and each other?
LIKENESS: A TOOL TO ACQUIT, A TOOL TO CONVICT
More than 160 years ago, Dickens penned A Tale of Two Cities, juxtaposing London and Paris during the French Revolution to warn what happens when emotion overtakes law and vengeance masquerades as justice, and to show how the likeness of two men determines their fates.
In London, Charles Darnay's striking resemblance to Sydney Carton is enough to acquit Darnay of treason: their likeness renders the jury unable to prove guilt beyond a reasonable doubt. Yet that very resemblance stirs anguish in Carton. Upon Darnay's acquittal, Carton takes him out for his first dinner as a free man, and the more Carton drinks, the more he loathes Darnay, having watched the woman he loves, Lucie Manette, fall in love with Darnay instead. The resemblance the defense deliberately used to win Darnay's acquittal is the same resemblance that forces Carton to see in Darnay the man he could have been, and the love he never had.
Now we find Charles Darnay on the stand again, this time in a Paris unrestrained by the temperance of law. Though now a Londoner, Darnay was born in Paris, heir to the brutal, abusive aristocratic Evrémonde family; he rejected his title and moved to England, wanting no part in their cruelty or legacy.
In London, the likeness of two men preserved the principles of justice — it created doubt and thereby protected the innocent.
In Paris, however, Darnay did not find a justice system ruled by reason, but one overwhelmed by grief. There was no presumption of innocence until proven guilty beyond a reasonable doubt; he entered a courtroom already captive to public perception, where the resounding cry was death to the innocent as well as the guilty.
The revolutionary tribunal never intended to evaluate evidence; it intended to perform righteousness. When a judicial system determines guilt not by an individual's actions and intent but on the basis of symbolism, likeness is no longer a tool for proving innocence, as it was in London. In a revolutionary Paris, trapped under the fog of grief and the echo chamber of the mob, Darnay's likeness to his bloodline alone proved him guilty by association and sentenced him to the guillotine. Yet it is Sydney Carton who puts himself under the guillotine's blade, using his resemblance to Darnay to sacrifice himself in Darnay's place.
Before the French overthrew the aristocracy, Dickens introduces us to Monsieur Gabelle, the Evrémondes' tax collector and estate manager. It is Gabelle who forewarns of the thunder of revolution, rumbling quietly but still at a distance, telling the Evrémondes, "There is a ferment in the air... but there is still time to remove the cause of this unrest." Rather than meeting the occasion and course-correcting, the Evrémondes dismiss Gabelle's urging to shift the power dynamic, too blinded by arrogance and entitlement to see the fragility of a system built on the backs of the broken. In doing so, they seal their fate.
While the guillotine was once hailed by the French as a tool for delivering swift justice, the reality is that justice never needed a stage, and truth never needed rendering. And if we use AI as a weapon to sway courts, we haven't built a better future — just a cleaner blade.
STATE V. HORCASITAS
On November 13, 2021, Gabriel Horcasitas, 54, was stopped at a red light behind 37-year-old Army veteran Christopher Pelkey. As noted in the appellate decision State v. Horcasitas, Horcasitas repeatedly honked at Pelkey, who then got out of his truck and approached Horcasitas' vehicle while waving his arms and shouting threats.
The road rage incident turned deadly when, seconds later, Horcasitas shot and killed Christopher Pelkey.
Horcasitas was arrested, and the state of Arizona charged him with murder, but the first trial ended with the jury convicting him of the lesser charge of manslaughter (and of endangerment, because a second bullet struck a nearby vehicle), most likely because state prosecutors failed to prove that Horcasitas' actions were premeditated, a required element of a first-degree murder conviction.
Following the first trial, the defense appealed the conviction, showing that state prosecutors had failed to disclose critical evidence collected from Pelkey's phone. The withheld evidence included text messages indicating Pelkey's mental state and history of aggressive driving, supporting the defense's claim of self-defense. The defense also demonstrated that the state's prosecutors had violated both federal and state laws and procedures requiring the disclosure of exculpatory evidence.
The Arizona Court of Appeals agreed, vacated the conviction, and ordered a retrial.
The retrial began in March 2025, and anyone who bet on state prosecutors proceeding with caution the second time around lost that bet. The state rolled the dice again, this time relying on speculative storytelling and legal acrobatics to influence sentencing.
DEEPFAKE IMPACT
Following the arguments made during the retrial, the jury once again found Horcasitas guilty of manslaughter and endangerment. On May 1, prior to sentencing, Christopher Pelkey's loved ones were called upon to deliver impact statements.
Many flew in to do so; however, Pelkey's sister, Stacey Wales, struggled to articulate the depth of her ongoing grief after losing her brother too soon. Instead, she decided to write what she thought her brother would say, and it was that writing exercise in grief that became the script for the AI-generated deepfake of Christopher Pelkey delivering what was presented as his "own victim impact statement."
To summarize for those who have not watched this video, it included the AI-generated deepfake of Chris Pelkey and footage of him discussing his tours in the Middle East, his relationship with God, and the importance of community, followed by the deepfake thanking Judge Lang for overseeing the retrial despite scheduling conflicts with his daughter's spring break.
The deepfake went on to speak directly to Horcasitas, offering him forgiveness, and then offered life advice to Pelkey's loved ones, including urging them to embrace the gift of age. It showed a photo Chris Pelkey had taken years earlier with an age filter applied, saying this was "the best I can ever give" his loved ones of what he might look like if he "got the chance to grow old."
After the trial, Wales explained to Reuters that she was unable to forgive Horcasitas, but "felt her brother would have a more understanding outlook." She went on to say that she wrote what became the AI-generated transcript "to humanize Chris, reach the judge, and let him know his impact on this world and that he existed."
State prosecutors missed an opportunity to deliver something of even deeper impact: they could have asked Stacey Wales to read the transcript she wrote on her brother's behalf, prefacing it by telling the court, "I could not begin to find the words to convey the impact that my brother's untimely death has had on me, so I had to imagine what he would say." That would have been deeply human.
Instead, they found a new way to wield power, pursuing the likeness of the dead to condemn the living. In doing so, they turned law into a performance and set a new precedent: what matters is how well you perform and how hard you cry, not what is just. What makes it more terrifying isn't that a private attorney acted of their own volition, but that the state of Arizona is the prosecutor. When grief becomes a tool of state prosecutors, who is safe from the blade?
While our hearts can, and should, go out to Christopher Pelkey's family, we must not allow a judicial system to be reduced to sentencing defendants according to humanity's grief. For that reason, the use of AI to generate posthumous victim impact statements should not be permissible in court, even after jury deliberations have concluded. It sets a precedent for unchallenged emotion to be presented as "evidence," opening the door for undue influence to become legally permissible, for juries to issue convictions based on prejudicial impact rather than proof, and for judges to be swayed not by law but by sympathy, fear of backlash, or the emotional theater unfolding before them.
After the court's viewing of the video that day, it wasn't just the state prosecutors who failed. Judge Lang also missed a critical moment, one that could have been his very own Monsieur Gabelle moment and an opportunity to meet Stacey Wales with the humanity she deserved. At the conclusion of the video, Judge Lang could have turned toward Stacey Wales and her family and said:
"I am so terribly sorry for your loss. This video is deeply impactful, and it is a testament to the love that you all clearly hold so dearly for your brother, son, and friend. I completely understand the therapeutic utility in creating the video we all saw today. Having said that, I need to address the state counsel...
"Counselors, you failed Stacey Wales today; in doing so, you also failed Chris Pelkey and the justice system we all represent here in this courtroom. By allowing the AI-generated likeness of Chris Pelkey to deliver an impact statement, you risked setting a dangerous precedent: the submission of AI-generated testimony on behalf of the dead, a simulated projection of the family's grief used to manipulate sentencing.
"In doing so, you failed to preserve the sanctity of human testimony, to protect the court from undue influence, and to uphold the defendant's constitutional rights, introducing prejudicial material that is speculative at best and normalizes posthumous ventriloquism at worst. You were unable to distinguish memorial from manipulation. This is not advocacy, nor is it an impact statement. It is a distortion, and your use of it should be grounds for disbarment."
Like the revolutionary tribunal that decided Darnay's fate without the burden of evidence or intent, Judge Lang did not care that Horcasitas has no prior criminal record, or that he truly may have acted in self-defense out of fear for his life. All he cared about was the "power" that Pelkey's likeness represented: a simulation of Pelkey's humanity.
What the judge failed to understand, however, is that simulated humanity is not humanity at all — yet it was used to influence a legal outcome that sentenced Gabriel Paul Horcasitas to the maximum sentence of 10.5 years in prison.
A TALE OF TWO CITIES AT THE CROSSROADS OF CREATION AND CATASTROPHE
This is not an indictment of artificial intelligence itself. AI, when used ethically, has the power to illuminate, organize, and empower. It can help refine human thought, streamline decision-making, and support the kind of complex problem-solving that would otherwise require hours of labor or expertise that many people simply do not have access to. Used well, AI becomes a tool for equity, clarity, and creativity.
Across society, we’ve already seen how AI can support everything from small business operations to financial planning, education, design, and invention development. It helps people manage the administrative weight of their lives — organizing budgets, brainstorming product ideas, outlining strategies, and navigating complex systems like taxes or investment planning. In homes and workplaces alike, it is becoming a digital collaborator — not to replace human voice, but to help shape and support it.
AI is also being used in medicine, where it helps researchers analyze vast datasets to accelerate the discovery of treatments and potential cures for chronic and life-threatening illnesses. It assists in identifying patterns that would take teams of scientists years to uncover — from cancer research to rare disease diagnostics. In climate science, it’s being applied to model future environmental impacts and optimize sustainability efforts. In accessibility and inclusion, AI-powered tools have helped people with disabilities communicate more freely, navigate spaces more safely, and participate more fully in public life.
In creative and logistical fields alike, it helps people organize their thoughts, plan and build efficient workspaces, and even map out wellness routines. Its ability to sort, synthesize, and structure information has made it a quiet revolution in how everyday people approach productivity.
But the danger lies not in the machine — it lies in what we allow it to do in our name. When AI is used not to clarify, but to simulate — when it begins to impersonate humanity rather than support it — it becomes a tool of spectacle rather than truth. Especially in the justice system, where emotion must never outweigh evidence, and likeness must never become a shortcut to sentencing, we face a profound ethical crisis.
AI can and must be used in service of humanity — never as a substitution for it.
This case transcends matters of grief and justice; it is about whether we allow the rise of synthetic testimony to redefine what truth even means in a courtroom. And that decision will shape far more than one man’s sentence. It will shape the future of human credibility itself.
BENEATH THE RULE OF MEN ENTIRELY GREAT
In 1839, Edward Bulwer-Lytton wrote, "Beneath the rule of men entirely great, the pen is mightier than the sword."
When we look back at A Tale of Two Cities and its lessons, we may not find individuals in power who embody the 'rule of men entirely great,' but we do find everyday men who are great, above all Sydney Carton.
Prior to his execution in place of Darnay, Carton says, "It is a far, far better thing that I do, than I have ever done; it is a far, far better rest that I go to than I have ever known." In choosing to meet the guillotine, Carton reckoned with his own humanity: he atoned for his trespasses, sought redemption, looked the tribunal's cruel absurdity directly in the eye, and bravely forged the meaning and purpose that the tribunal's vengeance would seek forever, yet never find.
Many will argue, and already have, that we "shouldn't regulate AI until we have to" — but what they fail to understand is that once we "have to," it is already too late. Instead, we will find ourselves regulating the consequences, not the cause.
Deepfakes aren't coming; they are here. They are being used in courtrooms, deployed in politics, and wielded to threaten and extort everyday Americans. We are well past the line. We just haven't admitted it yet.
The greatest conundrum of the 21st century will be that technological acceleration on the scale of Moore's Law far outpaces humanity's ability to debate the ethical use of technology, both the technology already here today and that which we cannot yet fathom, in time to implement regulatory safeguards.
The US Supreme Court has not yet ruled on the admissibility of deepfakes in court, but someday it will have to, and Congress has yet to write common-sense regulations into law via an AI Bill of Rights and Responsibilities Act. Until those two branches of our government act, the courts should tread lightly on the use of AI-generated material.
Artificial Intelligence comes into this world in the same fashion as we do when we are born: a blank slate.
We are capable of becoming the best of us, or the worst of us.
The future isn’t out of our hands — it’s within them. And so it goes with AI.
So, will the pen, indeed, be mightier than the sword?
It depends, I suppose, on who is holding the pen.
Author's Note: This essay was originally published on Substack. All rights remain with the author. For syndication, adaptation, or reprint, please contact Kelly Owens via direct messenger on X @awanderingkelly.