AI-generated content is increasingly challenging the distinction between reality and fiction in digital culture. Entirely virtual personas, created with generative AI tools, are now able to simulate human features, voices, and behaviors convincingly. These synthetic influencers post lifestyle content, interact with followers, and secure brand endorsements without any physical presence.
Experts at Georgia Tech have highlighted both the technological advancements and societal challenges brought about by hyperrealistic AI content. “AI does not have emotions as we understand them in humans, but it knows how to mimic emotional speech,” said Mark Riedl, professor in the School of Interactive Computing. “Once we understand that AI is mimicking us, it is easy to understand how they can create believable outputs that sound authentic.”
Riedl noted that new AI video generation tools have allowed users to bypass traditional media channels and post directly on social platforms. “AI video generation tools and the ability to bypass traditional content channels and post directly to social media have opened up the floodgates,” he said.
Synthetic influencers like Nobody Sausage—a digitally animated character with over 30 million followers—illustrate this trend. Platforms such as Character.AI also enable millions of users to interact with virtual personas designed for realistic conversation.
Munmun De Choudhury, another professor in Georgia Tech’s School of Interactive Computing, expressed concerns about mental health impacts. She explained that hyperreal AI content may distort users’ perception of reality, especially among vulnerable groups: “This distortion can fuel anxiety, exacerbate body image and self-comparison issues, and contribute to a broader erosion of epistemic trust — our basic belief in what others present as true,” she said.
De Choudhury’s research suggests that social media already complicates authenticity and identity online. Hyperreal AI figures—ranging from deepfakes to emotionally resonant digital personas—make it harder for users to determine what is real or trustworthy. Adolescents or those experiencing mental health challenges may be more susceptible: “Individuals experiencing stress or social isolation may be more prone to believe deepfakes,” De Choudhury explained. “Such content often reinforces existing beliefs or fills gaps in social connection.”
The use of persuasive storytelling by AI further complicates matters. Riedl commented on how audiences become immersed through narrative: “Storytelling is a means of persuasive communication,” he said. “Our brains are attuned to stories in a way that can bypass critical thinking.”
Recent months have seen a sharp rise in deepfakes involving public figures such as Taylor Swift and Tom Hanks. More than 179 incidents were reported in the first four months of 2025 alone, exceeding the total reported in all of 2024. These range from impersonations intended as humor to fraudulent or explicit material.
Social media companies are under pressure over misinformation spread by synthetic media. "Platforms must invest in user-centered design, digital literacy interventions, and transparency about how algorithms surface such content," De Choudhury said.
Milton Mueller from the Jimmy and Rosalynn Carter School of Public Policy discussed regulatory responses across different regions. He pointed out challenges inherent in regulating generative AI globally: “Generative AI is part of a globalized and distributed digital ecosystem,” Mueller said. “So, which regulatory authority are you talking about, and how does it gain the leverage needed to control the outputs?”
Europe's AI Act mandates labeling requirements for synthetic media, backed by steep fines for violations, while efforts within the United States remain fragmented. The Federal Communications Commission has made robocalls using AI-generated voices illegal, with potential fines for violators, and several states are considering watermarking rules or criminal penalties for political deepfakes.
Mueller warned against excessive centralization: “Instead of freely trading data and establishing common rules, governments are asserting digital sovereignty,” he said.
He recommended addressing misinformation through decentralized governance rather than relying solely on regulation or automated controls, arguing that open debate and existing legal remedies, applied after problems arise, should guide moderation decisions.
Georgia Tech researchers conclude that platform transparency and interdisciplinary collaboration will be necessary as society adapts to hyperreal media environments.