The use of AI in healthcare, particularly generative tools such as large language model (LLM) chatbots, has grown rapidly over the past two years. Nearly all physicians have now tried LLM chatbots such as ChatGPT and Microsoft Copilot. Despite this strong growth, most physicians still use these tools for simple tasks such as searching for information.
Physicians continue to believe that AI and generative tools will have a positive impact on their medical practice in the future. These technologies are helping them streamline information gathering, support treatment selection, confirm diagnoses, and increase productivity. However, a majority of physicians continue to express concern about patients using AI tools for self-diagnosis.
Indeed, compared to one year ago, many doctors say they now spend more time correcting patient misconceptions that stem from AI chatbots and LLM-generated medical content. The growing accessibility of LLM chatbots has empowered patients to seek information more independently, but it has also increased the spread of misinformation that physicians must address in consultations.
As AI in healthcare continues to evolve, transparency, data security, and responsible use will be key to building trust among both physicians and patients. The future of medical AI lies not only in technological innovation but in how these AI assistants and generative models are integrated safely and effectively into everyday clinical practice.
Infographics may contain select findings from our Independent Studies. Contact us to find out if your organization qualifies for a complimentary presentation with access to the full report.
Ready to explore how data-driven insights can accelerate your pharmaceutical research? Connect with our team to discuss your goals.