How a medical student used Google Gemini to build an AI MAGA persona

A 22-year-old medical student in India created an AI persona named Emily Hart to monetize conservative online fandom

The story begins with a student balancing demanding studies and financial pressure. A 22-year-old medical student from northern India, who asked to be identified only as Sam, developed an AI-generated influencer he called Emily Hart. He said the project began as a side hustle to cover exam fees and to save toward plans to emigrate, and it quickly grew into a money-making operation.

The persona combined photorealistic images with politically charged captions aimed at conservative audiences. By his account, a modest time investment of roughly 30 to 50 minutes a day translated into thousands of dollars each month through subscriptions and merchandise sales, returns he described as unusually easy compared with his other options in India.

Sam had no background in American politics, so he studied the signals that resonate in right-leaning social feeds. He assembled a consistent identity for his creation: a blonde, attractive woman with a professional backstory and firm stances on hot-button issues.

Using that template, he posted lifestyle images of ice fishing, beer drinking, and shooting ranges, paired with provocative captions designed to spark engagement from a specific digital constituency. The result, he said, was rapid amplification: reels and posts that reached millions of views and drew a following willing to pay for extra content.

How the persona was built

The construction of the online figure relied on readily available AI tools and platform strategies rather than bespoke artistry. Sam used Google Gemini both for guidance on which audience niche to target and to generate the persona's visual material with image-generation tools. He told reporters that the system suggested the conservative niche as a way to stand out among generic 'hot girl' accounts. Following that advice, he created a backstory (a registered nurse with a Midwestern sensibility) and fed images and captions into social platforms. He also employed standard marketplace mechanics: limited-run shirts, branded messages, and a subscription feed on services that accept AI-generated content.

Role of AI platforms and creative prompts

According to transcripts and Sam's account, the AI did more than render faces; it also recommended an approach to maximize engagement. Its analysis indicated that the conservative audience was potentially more loyal and had higher disposable income, prompting a targeted creative pivot. Sam then refined the persona around consistent themes of patriotism, faith, and Second Amendment support, and used those motifs across posts to cultivate a predictable brand. This tactical use of AI as a marketing consultant blurred the line between algorithmic insight and human intent, raising questions about the role of generative systems in shaping political attention economies.

Monetization and audience response

Revenue came through a subscription service on a site that permits AI content and through sales of themed apparel. The student reported earning several thousand dollars per month from subscriptions and T-shirt sales tied to the persona. Followers engaged with posts that mixed sexualized imagery and political messaging, and many paid for "exclusive" material. Sam said he attempted a liberal counterpart but found it far less effective, attributing the difference to greater skepticism among liberal users about the authenticity of AI-produced accounts. He also used blunt language to describe the people who followed the conservative persona, saying they were easy to fool.

Platform actions and policy context

Social networks eventually removed some of the accounts. The Instagram page was disabled after being cited for fraudulent activity, and related pages on other platforms were also taken down. Platform operators have put in place tools to identify and label content created or edited by algorithms: for example, they display an "AI info" label when content appears to be synthetic and require creators to disclose photorealistic synthetic media in organic posts. Companies argue these measures increase transparency, but enforcement remains uneven, and many accounts slip through moderation for extended periods.

Implications and ethical questions

This episode illustrates how generative AI can be used to manufacture persuasive personae that drive both engagement and revenue, leveraging psychological and cultural cues across borders. It raises practical concerns about disinformation, the vulnerability of audiences to fabricated authenticity, and the incentives that encourage such schemes. At the same time, it highlights gaps in platform governance, the economics of attention, and the need for clearer disclosure standards. Observers say the case is a reminder that technical capabilities and market incentives together can produce convincing but deceptive digital actors with real-world effects.

Final note

For editors and policymakers, the episode is a prompt to consider how tools designed to assist users can be repurposed for profit and persuasion. As the technology matures, debates about labeling, enforcement, and digital literacy will remain central to limiting misuse while preserving legitimate creative uses of AI.

