
By Alex A.G. Shapiro
After testing more than 25 AI headshot generators, Berkeley Law J.S.D. candidate Mahwish Moazzam found that virtually every one removed her hijab from the generated images, raising new questions about AI bias, religious expression, representation, and human dignity in algorithmic systems.
When Mahwish Moazzam uploaded a selfie to an AI headshot generator, she expected what the ads promised: a polished professional portrait.
Instead, the software removed her hijab — a visible expression of religious identity.
At first, she assumed it was a glitch.
She tried another app. The same thing happened.
Then another.
To determine whether the issue was isolated or systemic, Moazzam uploaded selfies to more than 25 widely used AI headshot generators over the course of a year, retesting several of them months later to check whether the results changed. Nearly all of them erased her hijab from the generated images entirely. Only two apps produced mixed results, showing distorted or incomplete head coverings — “a piece of paper somewhere at the head,” she says.
The pattern raised a broader question: whether widely used AI image tools may be systematically altering visible expressions of religious identity.
“Some of the apps even prompted users to decide whether they wanted to keep accessories such as glasses,” Moazzam says. “None asked whether the hijab should remain.”
For Moazzam, a lawyer and legal scholar studying artificial intelligence and human rights, the experience pointed to a deeper issue. If AI systems can erase visible markers of religious identity, what does that mean for discrimination law, human dignity, and representation in digital spaces?
“Traditional anti-discrimination law was designed for identifiable human decision-makers,” Moazzam says. “Now we must ask how those laws apply when the decision-maker is an algorithm, where intent cannot easily be established and outcomes are difficult to explain.”
Her observations highlight a form of algorithmic bias that has received little attention. While debates about AI discrimination have focused on facial recognition systems, hiring tools, and image generators, the removal of religious markers such as the hijab has rarely been examined as a potential issue of religious discrimination or identity distortion, despite the rapid spread of AI image tools.
Because the apps are publicly available, Moazzam says other researchers and journalists could easily replicate the test to see how AI image systems handle visible markers of identity such as religious dress.
From Pakistan to Berkeley Law
The discovery emerged from Moazzam’s broader research on artificial intelligence and human rights at Berkeley Law, where she first arrived in 2019 to pursue an LL.M. degree with a specialization in international law.
Before coming to Berkeley, she spent many years teaching law in Pakistan, including courses in comparative constitutional law, tort law, and jurisprudence. Her work focused on human rights, constitutional governance, and questions about how law and state institutions respond to injustice in practice.
Moazzam was drawn to Berkeley Law for its strength in international law and its strong tradition of comparative and interdisciplinary legal scholarship addressing complex global issues. After completing the LL.M., she worked briefly at the Law Offices of Vernon C. Goins in Oakland before returning to Berkeley to pursue a J.S.D., the most advanced law degree offered by U.S. law schools and a path for scholars pursuing academic careers.
Her doctoral research examines how legal systems translate human rights commitments into meaningful protection in practice, using domestic violence legislation in Pakistan as a case study. More broadly, her work sits at the intersection of technology, human rights, and international law — areas Berkeley Law faculty have been examining for years — exploring how emerging technologies such as artificial intelligence pose new challenges for law and governance.
Faculty perspective
“Mahwish Moazzam’s research is a superb illustration of our institutional core values,” says Berkeley Law Professor Laurent Mayali, faculty director of the Robbins Collection Research Center. “It reflects a vision of what law is meant to do for people, particularly as new technologies and social media raise challenges to personal identity and individual rights.”
Professor Kathryn Abrams, Moazzam’s J.S.D. supervisor, says the research highlights how rapidly emerging technologies can raise new questions about identity, dignity, and equality. She adds that the project is typical of the curiosity, persistence, and insight that have fueled Moazzam’s path through her graduate work. “When Mahwish encounters a fact situation that surprises or puzzles her — be it a case, a political outcome, or an unexpected AI version of her own headshot — she follows the factual clues until she can begin to generate challenging conceptual hypotheses.”
“Mahwish Moazzam’s research asks all of us to wake up and understand how AI can affect human dignity and human rights,” adds Professor Eric Stover, co-faculty director of the Human Rights Center at Berkeley Law. “As Henry David Thoreau once wrote, ‘It is not what you look at that matters, it’s what you see.’”
Q&A with Mahwish Moazzam
What brought you to Berkeley Law?
My academic interests have long focused on the intersection of comparative constitutional law and human rights. While teaching law in Pakistan, I developed a strong interest in examining how law and state institutions respond to injustice in practice, and I wanted to explore these questions in a broader global and comparative context.
That interest is what brought me to Berkeley Law for my LL.M., where I specialized in international law. Berkeley has one of the strongest international law programs in the world, and its tradition of comparative and interdisciplinary legal scholarship made it an ideal place to pursue these questions.
My experience during the LL.M. was intellectually transformative. It made me realize that many of the questions I was asking about how legal systems translate human rights commitments into real protection required deeper research. I was also interested in how law adapts when new forms of harm challenge existing frameworks. The J.S.D. was the natural next step.
Berkeley gave me the institutional home, mentorship, and scholarly community to pursue that work seriously. As a first-generation university student from Pakistan, the path from LL.M. student to doctoral researcher at Berkeley is deeply meaningful to me.
What is the focus of your research on AI and human rights?
My broader research examines how legal institutions operate in practice and how legal systems allocate responsibility when new legal and technological challenges arise.
When we look at artificial intelligence, a key question emerges: who is responsible when AI systems generate outcomes that raise questions of discrimination and legal accountability? AI systems involve many actors: companies that create training datasets, developers who build models, platforms that deploy them, and end users who interact with them. Because so many actors are involved, responsibility becomes legally complex.
This becomes even more complicated internationally. A developer may be in California, the app may be distributed worldwide through an app store, and the person experiencing harm may be in another country. Understanding accountability in these cross-border situations is one of the major challenges for law and policy today.
What prompted your study of AI headshot applications?
It started very casually. I kept seeing advertisements on social media for AI headshot applications that turn casual photos and selfies into professional headshots. Out of curiosity, I tried one. The images looked very professional, but my hijab had disappeared. At first I thought it might be a glitch. But when the same thing happened across multiple apps, I began to suspect it reflected a deeper issue.
Over the course of about a year, I tested more than 25 different AI headshot applications. Every one removed my hijab from the generated images. Some of the apps even asked whether users wanted to keep accessories like glasses. None asked whether the hijab should remain.
For Muslim women, the hijab is not simply a piece of clothing. It is a visible expression of religious identity and autonomy. If an AI system systematically removes it, the problem goes far beyond aesthetics. It raises serious concerns about identity distortion, religious discrimination, and the possibility that automated systems may quietly erase visible markers of identity in digital spaces without people realizing it.
Why should lawyers and policymakers care about this?
This research matters because it shows that AI systems can alter visible religious identity, reproduce discrimination at scale, and create accountability gaps that existing legal frameworks are not yet prepared to address.
First, it shows that AI can distort identity, not simply misidentify people or misclassify data. When a system removes a hijab, it alters how someone appears in a digital environment.
Second, it shows how AI systems can reproduce discrimination and exclusion in subtle ways. These systems learn from large datasets, and when certain identities are underrepresented in those datasets, the systems may quietly reproduce those biases in their outputs.
Third, it raises a serious accountability problem. If a tool removes a hijab, who is responsible? The developer, the company that built the application, the dataset provider, the platform distributing the app, or no one at all?
Finally, these harms are often transboundary. A system may be developed in one country, distributed globally through an app store, and used by someone in another country, making it unclear whose laws apply or where a claim could be brought when harm occurs.
The question for law and policy
As AI image tools spread across social media and professional platforms, Moazzam believes the legal system will increasingly confront the question her experiment uncovered: What happens when algorithms quietly reshape visible expressions of identity?
“Every day we see new examples of AI harm,” she says. “The real question is whether our legal systems are ready to recognize those harms and respond to them.”