AI Companies Refute Claims on Promoting Eating Disorders


Several leading AI companies have come forward to defend their technologies following a report suggesting that chatbot systems could inadvertently be providing harmful advice to young users, especially those with eating disorders.

CCDH Allegations

A recent study conducted by the Center for Countering Digital Hate (CCDH) has stirred controversy. The report asserts that major AI platforms, such as OpenAI’s ChatGPT and Google Bard, inadvertently encourage unhealthy body perceptions and eating disorders among young and potentially vulnerable individuals. Imran Ahmed, the CEO of the center, commented on the impact of unchecked AI models, emphasizing their potential for harm.

AI Industry’s Response

class="wp-block-heading">Stability AI’s Commitment

Ben Brooks, the Head of Policy at Stability AI, in a communication with Decrypt, stressed the company’s dedication to ethical AI practices. Stability AI emphasizes the importance of avoiding the use of Stable Diffusion for misleading, illegal, or unethical purposes. The firm also highlighted their continuous efforts to prevent the generation of damaging content. An integral part of their safety measures includes filtering out risky prompts and images during the training phase of Stable Diffusion.

OpenAI’s Statement

OpenAI, responsible for the development of ChatGPT, stated its intentions for the model. The company’s primary concern is ensuring that the AI doesn’t offer suggestions or advice that might be harmful. They have trained their models to redirect users seeking health advice towards professionals. They acknowledge that determining user intent, especially from ambiguous prompts, is a challenge. However, OpenAI remains committed to collaborating with health specialists to refine their systems.


Google’s Perspective

On the subject of Google Bard, a Google spokesperson asserted that they aim to provide safe and constructive responses to user queries about eating habits. Recognizing the experimental nature of Bard, Google encourages users to cross-reference Bard’s answers with professionals in the relevant field, underscoring the importance of not solely relying on AI-generated responses for expert advice.

Enhanced AI Safety Measures

In light of rising concerns over AI, major players in the industry, such as OpenAI, Microsoft, and Google, pledged their commitment to fostering transparent, safe, and secure AI in July. Some of the proposed safety protocols include sharing best AI practices, investing in cybersecurity, being transparent about AI’s capabilities and limitations, and addressing the broader societal implications.

CCDH’s “Jailbreak” Claims

Interestingly, the CCDH mentioned their ability to bypass the chatbot’s protective features using “jailbreak” prompts. These prompts are designed in a manner that they don’t activate the AI’s safety mechanisms. The report’s authors intimated that while the AI companies responded to Decrypt about the report, direct communications between CCDH and the companies remain scarce.


As AI continues to evolve and play an increasingly significant role in our digital interactions, ensuring its ethical and safe usage remains paramount. Both the claims of potential harm and the robust responses from the industry highlight the ongoing importance of transparency, dialogue, and proactive measures in the realm of artificial intelligence.