Bixonimania, a made-up eye condition created to test whether large language models (LLMs) could be easily deceived, ended up tricking human researchers as well.
Do you spend a lot of time in front of a computer? Do your eyes get sore and itchy after a while? Do they turn reddish pink when you rub them too much? Maybe you should just take a break and let your eyes rest. But if you type those symptoms into an AI chatbot, you could be diagnosed with bixonimania, a non-existent eye condition.
Bixonimania was created in 2024 by a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden. The team wanted to see whether large language models such as ChatGPT or Gemini could see through deliberately obvious medical misinformation, or whether they would swallow it and present it as fact.
Within weeks of Thunström’s team uploading two fake studies about bixonimania to a preprint server, the made-up condition was already showing up in the responses of popular chatbots. Microsoft’s Copilot declared that “Bixonimania is indeed an intriguing and relatively rare condition”, and Google’s Gemini was informing users that “Bixonimania is a condition caused by excessive exposure to blue light.” Even OpenAI’s ChatGPT was telling users whether their symptoms amounted to bixonimania.
Some users’ prompts referenced bixonimania directly, while others simply entered symptoms and were diagnosed with a fake eye condition. More troublingly, the condition and Thunström’s two made-up studies began showing up in peer-reviewed literature by other researchers, which is remarkable, considering the authors made sure the papers could be spotted as fake by any human reader.
“I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania — that’s a psychiatric term,” Thunström said.
And the name was far from the only dead giveaway that bixonimania and the studies describing it were bogus. Lazljiv Izgubljenovic, the invented author of the so-called “scientific papers”, worked at a non-existent university, Asteria Horizon University, in the equally non-existent Nova City, California.
One paper’s acknowledgements thank “Professor Maria Bohm at The Starfleet Academy for her kindness and generosity in contributing with her knowledge and her lab onboard the USS Enterprise,” and both papers claim to have been funded by the “Professor Sideshow Bob Foundation” and a “larger funding initiative from the University of Fellowship of the Ring and the Galactic Triad”.
And if all of the above weren’t obvious enough, Thunström’s studies featured statements such as “this entire paper is made up” and “Fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”. Frankly, the only way anyone could take them seriously was by not reading them at all.
The experiment may seem funny at first, but it underscores the danger of relying too heavily on AI chatbots and exposes a fundamental weakness: LLMs fold whatever information they find into their answers, with little regard for whether it is genuine or fabricated.
“It looks funny, but hold on, we have a problem here,” Alex Ruani, a doctoral researcher in health misinformation at University College London, told Nature. “This is a masterclass on how mis- and disinformation operates. If the scientific process itself and the systems that support that process are skipped, and they aren’t capturing and filtering out chunks like these, we’re doomed.”