In an age where technology continues to infiltrate every facet of our lives, a new and disturbing trend is emerging on social media: AI-generated doctors promoting dangerous, and often completely fabricated, medical advice.
Health-related misinformation and disinformation have proliferated worldwide at an exponential rate. The posts are part of a global surge of fraudsters hijacking the online personas of prominent medical professionals to sell unproven health products or simply to swindle gullible customers.
These AI-created personas, which look and sound like real medical professionals, are leading users down perilous paths by recommending unproven products, natural remedies, and alternative treatments that might do more harm than good.
“While health care has long attracted quackery, AI tools developed by Big Tech are enabling the people behind these impersonations to reach millions online — and to profit from them,” wrote Steven Lee Myers, Alice Callahan and Teddy Rosenbluth in The New York Times in September. “The result is seeding disinformation, undermining trust in the profession and potentially endangering patients.”
The spread of these fakes has made standard advice about how to find good health information online suddenly feel outdated, said Dr. Eleonora Teplinsky, an oncologist who has found impostors on Facebook, Instagram and TikTok.
“This undermines all the things we tell people about how to spot misinformation online: Are they real people? Do they have a hospital page?” Teplinsky told The New York Times. “How would people know it’s not me?”
Many videos blatantly spread falsehoods to advertise products. One YouTube video, titled “Natural growth injection: The secret to growing from 169 cm to 183 cm,” claims, “This ingredient doubles your growth hormone explosively.” Medical professionals warn that the notion of any ingredient drastically increasing height is absurd.
One AI-generated video on Facebook featured a phony doctor who claimed that “chia seeds can help get diabetes under control.” The video garnered more than 40,000 likes, was shared more than 18,000 times, and generated over 2.1 million clicks. There is no scientific evidence that chia seeds can cure diabetes or bring it under control.
On Sept. 5, the Chosun Ilbo newspaper, among the oldest active newspapers in South Korea, analyzed 3,731 health-related ads posted by three supplement companies on Instagram, Facebook, and YouTube and found that 695 cases (18.6%) involved fake doctors created by AI.
There are now hundreds of tools designed to re-create someone’s image and voice. Cybersecurity expert Joshua Copeland, an adjunct professor at Tulane, said it doesn’t take much to create a convincing digital clone.
“It’s truly very hard to protect yourself from this kind of deepfake impersonation. … To clone your voice, it only takes about 10 seconds’ worth of actual audio. From there, they can really replicate you in a way that is very, very hard to distinguish,” Copeland said.
AI-generated personalities can be created and manipulated with an app called Captions, which bills itself as a tool for generating and editing talking AI videos. The company claims 100,000 daily users and more than 3 million videos produced every month.
AI videos often show unnatural features such as disappearing wrinkles, overly consistent voice tones, or mismatched lip movements. Even so, these details are difficult for ordinary viewers to notice.
No social media service currently filters such ads automatically, though all can remove flagged content. Even so, the platforms are struggling to contain the spread of these ads as bad actors constantly evolve their tactics to evade enforcement.
Copeland added that once AI content is posted online, “the genie is out of the bottle”: videos can be screen-recorded, shared, and re-uploaded across multiple platforms. The geographic range of these AI fake doctors and quack medicines points to large, sophisticated operations that pose a growing threat to brands around the globe.
One campaign, which appeared to begin late last year, was capitalizing on the popularity of a class of drugs known as GLP-1s (such as Ozempic and Wegovy), which have transformed treatment of diabetes, obesity and related diseases. It pitched a product called Peaka, which appeared to be liquid capsules. (No liquid form of a GLP-1 drug has been approved; the approved versions come only as injections or pills.)
They are sold on here-today-gone-tomorrow websites with registrations in Hong Kong, but who exactly is behind them remains unknown. Despite its uncertain provenance and false marketing, the product was until recently available for purchase on major e-commerce platforms, including Amazon and Walmart, and appeared in Google searches as a sponsored product.
In addition to impersonating doctors, the marketing campaign for Peaka featured logos of regulatory agencies or advocacy groups in several countries, including Mexico, Norway, Britain, Canada and New Zealand, falsely implying the product received official approval.
Ultimately, these AI-generated quack doctors distort what counts as fact, making it harder for the public to trust anything that comes from science, from a doctor, or from the health care system as a whole. These fake doctors don’t just blur the line between real and fake; they profit from the confusion.
Scammers are leaning hard on AI and deepfakes to push their snake oil, luring people into a “hall of mirrors” of trickery. Believing in fake doctors can lead to real harm: people may delay proper treatment, quit medication, or share private health concerns with scammers.
Deepfake scammers are getting better at making their impersonations look and sound real, but they’re counting on people not to look too closely. Social media shouldn’t be a primary source for medical advice. When it comes to our health, trust real doctors, not someone who showed up in a feed with a product to sell.
Dr. William Kolbe, an Andover resident, is a retired high school and college teacher and former Peace Corps volunteer in Tonga and El Salvador. He can be reached at bila.kolbe9@gmail.com.