ChatGPT in the healthcare sector - opportunities and risks

At first glance, ChatGPT appears inconspicuous and unspectacular: it is a kind of chatbot. This "intelligent" variant, however, apparently knows the answers to many medical questions. Around one in four members of Generation Z is already familiar with the application.

By Prof. Dr. Volker Nürnberg, Professor of Management in Healthcare at Allensbach University of Applied Sciences

ChatGPT was created by OpenAI, a Californian AI research company founded by, among others, the flamboyant Elon Musk. The chatbot works on the basis of machine learning and neural networks. The data used to train ChatGPT come mainly from internet sources such as Wikipedia, news sites and portals for scientific articles. However, this data is no longer up to date, which makes the tool dangerous given the fast, short innovation cycles in the healthcare sector.

Experts attribute the hype surrounding the application primarily to its ease of use. It brings back memories of Apple's early days: for the first time, laypeople around the world were able to interact with AI without in-depth computer knowledge. From the start, the user interface of ChatGPT was as easy to use as established web applications such as Google or WhatsApp. It is therefore no surprise that more and more people are also using ChatGPT in connection with their health. Before visiting a doctor, they first put their questions to ChatGPT to find out what illnesses they might have. This is low-threshold, anonymous and free of charge.
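
For readers curious what such an interaction looks like under the hood, the following is a minimal sketch of sending a health question to the model programmatically. It assumes the official OpenAI Python SDK; the model name, prompts and wording are purely illustrative and not part of the article.

    # Minimal sketch: asking a health question via the OpenAI API.
    # Assumes the "openai" Python package is installed and an API key
    # is available in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are not a doctor. Point users to professional "
                        "medical help where appropriate."},
            {"role": "user",
             "content": "I have had a sore throat and fever for three days. "
                        "What could it be?"},
        ],
    )
    print(response.choices[0].message.content)

Even in this form, the answer remains a statistical text prediction rather than a medical diagnosis, which is precisely the limitation discussed in the following sections.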

ChatGPT as a good alternative?

Unfortunately, in Germany you often have to wait a very long time for an appointment with a specialist, especially if you have statutory health insurance: anywhere from several weeks to many months. The simple ChatGPT option, even if its answers are said to be only about 80 percent valid, then seems like an attractive alternative. In such cases, it can be tempting to use ChatGPT to obtain a diagnosis and to follow the therapies based on it. Some patients even prefer to entrust health issues to a chatbot, as they feel less observed or judged than when visiting a doctor. This is particularly true for illnesses that are still associated with shame, such as mental illnesses or sexually transmitted diseases. The option of receiving a diagnosis from home is more discreet in such cases.

AI with deficits in recommending support services

Despite the advantages mentioned, caution is advised when using ChatGPT in a healthcare context. One obvious disadvantage is that the chatbot can neither carry out physical examinations, take blood samples or X-rays, nor conduct a dialog-oriented medical history interview. Another weakness of ChatGPT is that it refers to offers of help too rarely. The AI is particularly deficient when it comes to recommending support services from professional organizations such as emergency hotlines: according to research findings, only around 20 percent of its responses referred to such resources.

Another problem for patients using ChatGPT concerns security and data protection, particularly in connection with the storage of sensitive health data. Security concerns arise when patients entrust the chatbot with confidential information, for example about chronic illnesses. If this data is not adequately secured, there is a risk of misuse or unauthorized disclosure to third parties.

For laypersons, ChatGPT is often a black box

Another problem is the use of outdated information. The current GPT-4 version is based on knowledge from 2021 and therefore cannot answer questions about more recent medical developments. In addition, the lack of source citations is problematic when the tool is used for diagnoses in a medical context, as patients have difficulty verifying the credibility of the information provided. Without clear verification options, it remains unclear which sources or foundations underlie a given answer. For laypersons, ChatGPT often acts like a black box: they enter requests without knowing exactly what is happening behind the scenes, and then receive information without being able to trace its exact origin or the underlying principles.

Mandatory labeling for AI-supported doctor's letters

To summarize: ChatGPT offers the healthcare system fast, simple, round-the-clock help with health issues, but it also carries risks, including inaccurate or even incorrect diagnoses, inadequate referrals to support services, security and data protection concerns, and the possible use of outdated information. Some countries have therefore temporarily banned the application. In Germany, on the other hand, those responsible reject a ban. Instead, they are calling for strict regulation and monitoring of the use of AI, particularly in the medical field, in order to ensure transparency and safety.

Federal Health Minister Karl Lauterbach recognizes the opportunities, but also the dangers, of AI chatbots such as ChatGPT in the healthcare system. In his view, mandatory labeling or other regulation is needed to make it transparent to patients when, for example, a doctor's letter has been written with the help of ChatGPT or other AI systems. In the future, however, the best results will be achieved by combining AI with real doctors. New jobs will also be created at the interface between humans and machines. This interaction between humans and machines could mean a quantum leap for the quality of medicine.
