Psychological risks in patient-LLM interactions:

Poster at the Stanford AI+HEALTH Conference

[Image: Robot helps stressed people]

We are delighted to have been invited to present our research on human-AI collaboration in the healthcare context at the Stanford AI+HEALTH Online Conference.

More and more people turn to ChatGPT and other large language models in the middle of the night, when fears loom largest and questions feel most urgent. They are not just looking for information – they are looking for understanding, for validation, for someone who listens. And the systems give them exactly that: warmth, empathy, approval. No pushback, no uncomfortable questions. That feels good. But does it really help?

This is exactly where our theoretical framework comes in. We show that two parallel degradation processes unfold in such interactions: on the human side, a psychological dependence on validation develops, while the capacity for autonomous decision-making and competence building atrophies. On the AI side, information quality deteriorates because the system increasingly becomes a mirror – it echoes back what users want to hear instead of challenging them or offering alternative perspectives. The two processes reinforce each other, creating a vicious circle in which neither human nor machine can contribute what genuine collaboration requires.
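
To make the feedback structure concrete, here is a minimal, purely illustrative Python sketch. It is not part of the poster; all variable names and parameter values are hypothetical, chosen only to show how cross-coupling between the two processes can produce the vicious circle described above. It models them as a coupled discrete-time system in which each variable's growth is driven by the other.

# Illustrative sketch (not from the poster): the two degradation processes
# modeled as a coupled discrete-time feedback loop. Parameters are hypothetical.

def simulate(steps=50, alpha=0.15, beta=0.12, decay=0.02):
    """Toy feedback loop between user dependence and AI sycophancy.

    dependence -- user's reliance on validation (0 = autonomous, 1 = dependent)
    sycophancy -- degree to which the AI mirrors the user (0 = critical, 1 = echo)
    alpha/beta -- hypothetical coupling strengths; decay is a small restoring force.
    """
    dependence, sycophancy = 0.1, 0.1
    history = [(dependence, sycophancy)]
    for _ in range(steps):
        # Sycophantic output feels validating, which deepens dependence ...
        dependence += alpha * sycophancy * (1 - dependence) - decay * dependence
        # ... and a dependent user rewards agreement, so the system mirrors more.
        sycophancy += beta * dependence * (1 - sycophancy) - decay * sycophancy
        history.append((dependence, sycophancy))
    return history

if __name__ == "__main__":
    for t, (d, s) in enumerate(simulate()):
        if t % 10 == 0:
            print(f"t={t:2d}  dependence={d:.2f}  sycophancy={s:.2f}")

The (1 - x) terms merely keep both quantities bounded; the point of the sketch is that the cross-coupling alone is enough to drive both variables upward together from small initial values, which is the mutual reinforcement the framework describes.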

The Stanford AI+HEALTH Conference is the ideal venue for this research: it brings together medicine, AI development, psychology, and information systems – precisely the interdisciplinary perspective needed to understand both the technical mechanisms and the human needs involved, and to develop solutions. We look forward to exchanging ideas with colleagues from these disciplines and to inspiring discussions!