What we tested and what we found
Between January and February 2026, we ran a study with 332 adults across Germany. Participants were a broad cross-section of the German public, not selected from a clinical population; 89.5% were insured through the statutory health system (GKV). We tested whether validated psychological constructs from health behaviour research could predict willingness to engage with AI-based chronic care support.
Up to 53% of the variation in engagement intention was explained by our models.
A simpler model using just four factors works as well as, and sometimes slightly better than, a more complex model using all seven psychological factors tested.
The four strongest predictors of whether someone will engage with a health AI system are: their belief that they can actually change, their expectation that change will make a difference, whether they have a concrete plan for how to act, and whether they actively monitor their own behaviour day to day.
The system does not need to be complicated to be effective.
The Problem: Why most digital health tools fall short
Chronic diseases are the biggest burden on health systems across Europe. In Germany, around 40% of adults live with at least one chronic condition. Managing these conditions requires consistent, sustained changes to daily life, and this is where most digital health tools fail.
The dominant assumption is that if you give people enough information (their step count, their blood sugar, a personalised risk score), they will change their behaviour. Decades of health psychology research show that this is not how change works. Information is necessary but not sufficient. People also need to believe they are capable of change, to see a clear benefit, to have a plan, and to be supported when things get difficult.
The problem is not a lack of data. It is a lack of psychological grounding. Most health AI is built to predict outcomes. Very little is built to understand and support the process of change.
The Health Action Process Approach
The framework underlying our system is called the Health Action Process Approach (HAPA), developed by health psychologist Ralf Schwarzer and tested in hundreds of studies over the past 30 years. Its core insight is that behaviour change happens in two stages, not one.
Stage 1: Motivation
Developing the intention to change. This depends on perceived risk, the belief that change will help, and, most importantly, the belief that you are capable of making that change (self-efficacy).
Stage 2: Action
Actually doing it, and keeping doing it. This requires concrete planning, strategies for dealing with setbacks, and the habit of checking in with yourself.
This distinction is critical for AI design. A system that only targets motivation will consistently fail to support follow through. Our system is designed to address both stages.
HAPA has been validated across heart disease rehabilitation, type 2 diabetes management, and cancer screening. Across all of these, two factors come up consistently: self-efficacy (the belief that you can do it) and action planning (having a concrete plan for when, where, and how).
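To make the two-phase structure concrete, here is a minimal sketch of how the HAPA constructs might be represented in software. The construct names follow the published model, but the field names, the 1-to-7 scale, and the crude phase rule are assumptions made for this illustration, not the study's actual schema.

```python
# Illustrative only: the two HAPA phases as a simple data model.
# Construct names follow the published model; field names, the 1-7 scale,
# and the phase rule are assumptions for this sketch.
from dataclasses import dataclass


@dataclass
class MotivationalPhase:
    """Constructs that shape whether an intention to change forms."""
    risk_perception: float        # perceived susceptibility and severity
    outcome_expectations: float   # "changing will actually help me"
    self_efficacy: float          # "I am capable of making this change"


@dataclass
class VolitionalPhase:
    """Constructs that turn an intention into sustained behaviour."""
    action_planning: float        # a concrete when / where / how plan
    coping_planning: float        # strategies for anticipated setbacks
    action_control: float         # day-to-day self-monitoring


@dataclass
class HAPAProfile:
    motivation: MotivationalPhase
    volition: VolitionalPhase

    def needs_planning_support(self) -> bool:
        """Crude illustrative rule: motivated but without a workable plan."""
        motivated = min(self.motivation.self_efficacy,
                        self.motivation.outcome_expectations) >= 5.0
        return motivated and self.volition.action_planning < 4.0
```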
How the Digital Twin works
Loretta’s Chronic Care Digital Twin is not a single algorithm. It is three interconnected systems, each designed to do a different job, and each corresponding to a different part of the HAPA model.
| System | What it does | Psychology behind it |
|---|---|---|
| PSI (Psychological State Inference) | Works out where you are right now: how motivated, how confident, how ready to act | Maps to the motivational phase of HAPA: inferring self-efficacy, outcome expectations, and readiness to change |
| BTE (Behavioural Trajectory Engine) | Tracks behaviour patterns over time and predicts how they might change | Maps to the volitional phase: modelling how planning and self-regulation turn intention into sustained behaviour |
| IST (Intervention Simulation Twin) | Simulates what different types of support would do, before they are tried | Uses validated psychological predictors to model the likely impact of different interventions |
AI in healthcare should extend and support clinical judgement, not replace it. Every component is designed to give healthcare professionals better information, not to take decisions out of their hands.
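As a rough illustration of how the three components could hand information to one another, the sketch below wires up placeholder versions of PSI, BTE, and IST. The class names, method signatures, and toy dynamics are invented for this example; they are not Loretta's actual interfaces, and the ranked output is information for a clinician, not an autonomous decision.

```python
# Illustrative pipeline sketch: PSI -> BTE -> IST. All names and logic are
# placeholders, not Loretta's actual implementation.
from typing import Dict, List

Constructs = Dict[str, float]   # e.g. {"self_efficacy": 5.2, "action_planning": 3.1}


class PsychologicalStateInference:
    """PSI: estimate the current motivational / volitional state."""

    def infer(self, observations: Dict[str, float]) -> Constructs:
        # A real system would use a fitted model; here scores pass straight through.
        return dict(observations)


class BehaviouralTrajectoryEngine:
    """BTE: project how constructs may evolve over time."""

    def project(self, state: Constructs, weeks: int) -> List[Constructs]:
        # Placeholder dynamics: constructs drift toward the scale midpoint.
        trajectory, current = [], dict(state)
        for _ in range(weeks):
            current = {k: v + 0.1 * (4.0 - v) for k, v in current.items()}
            trajectory.append(dict(current))
        return trajectory


class InterventionSimulationTwin:
    """IST: rank candidate supports by modelled effect before anything is tried."""

    def simulate(self, state: Constructs, interventions: Dict[str, Constructs]) -> List[str]:
        def expected_gain(effects: Constructs) -> float:
            # Weight each effect by how much headroom the person has on that construct.
            return sum(effects.get(k, 0.0) * (7.0 - v) for k, v in state.items())
        return sorted(interventions, key=lambda name: expected_gain(interventions[name]),
                      reverse=True)


# Example flow: the clinician, not the system, acts on the ranked output.
psi, bte, ist = PsychologicalStateInference(), BehaviouralTrajectoryEngine(), InterventionSimulationTwin()
state = psi.infer({"self_efficacy": 3.0, "outcome_expectations": 5.5, "action_planning": 2.5})
trajectory = bte.project(state, weeks=4)
ranked = ist.simulate(state, {"planning_support": {"action_planning": 0.6},
                              "motivational_messaging": {"self_efficacy": 0.3}})
```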
The four predictors that matter
Across all three components of the Digital Twin, the same four psychological factors emerged as the strongest predictors of engagement. Our models explained between 44% and 53% of the variation in engagement intention, a strong result in health psychology research.
Self-Efficacy
Believing you are capable of making health changes. The single strongest predictor across all three components of the system.
Outcome Expectations
Believing that making changes will actually make a difference to your health. Consistently the second strongest factor.
Action Planning
Having a specific plan for when, where, and how you will act. Significant in two of the three system components.
Action Control
Actively paying attention to your own health behaviour day to day. Significant in two of the three system components.
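For readers who want to see what "share of variation explained" means in practice, the sketch below fits a four-predictor and a seven-predictor linear model to synthetic data and compares their adjusted R². The data and coefficients are invented, the first four column names mirror the predictors above and the remaining three are hypothetical extras; the study's real analysis and figures are in the paper. The point is only to show why a simpler model can match a fuller one once model size is accounted for.

```python
# Synthetic illustration of "share of variation explained". Data and
# coefficients are invented; only the analysis pattern mirrors the study.
import numpy as np

rng = np.random.default_rng(0)
n = 332  # matches the study's sample size, purely for flavour

# Four constructs named in the text plus three hypothetical extras, 1-7 scale.
names = ["self_efficacy", "outcome_expectations", "action_planning",
         "action_control", "risk_perception", "coping_planning", "social_support"]
X = rng.uniform(1, 7, size=(n, len(names)))

# Synthetic engagement intention driven mainly by the first four constructs.
y = (0.45 * X[:, 0] + 0.30 * X[:, 1] + 0.20 * X[:, 2] + 0.15 * X[:, 3]
     + rng.normal(0.0, 1.0, size=n))


def adjusted_r2(X_sub: np.ndarray, y: np.ndarray) -> float:
    """OLS fit with intercept; adjusted R^2 penalises extra predictors."""
    n_obs, p = X_sub.shape
    design = np.column_stack([np.ones(n_obs), X_sub])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 - (1.0 - r2) * (n_obs - 1) / (n_obs - p - 1)


print("4-predictor adjusted R^2:", round(adjusted_r2(X[:, :4], y), 3))
print("7-predictor adjusted R^2:", round(adjusted_r2(X, y), 3))
```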
Implications for every stakeholder
Support that meets you where you are
Most people living with a chronic condition know, in broad terms, what they should be doing. The gap is psychological: whether they believe they can sustain it, whether the benefit feels real, whether they have a plan that fits their actual daily life. Loretta’s system infers and responds to these factors, so the support it provides is designed to meet you where you are psychologically, not just biologically. And because these factors are ones any healthcare professional already understands, your doctor can look at what the system is doing, understand why, and meaningfully question it.
Evidence that withstands scrutiny
This study provides a purpose-built validation of the psychological model underlying the system, conducted by an independent research institute, funded through a public innovation grant, with a sample that is 89.5% GKV-insured. The four validated constructs have quantified importance weights derived from real data. The architecture satisfies EU AI Act requirements by design: interpretability is intrinsic, human oversight is structural, and patient data does not leave institutional perimeters.
A new direction for health AI research
The finding that HAPA constructs predict engagement with an AI system through the same parameters that predict dietary adherence, exercise, and cancer screening in clinical populations is not obvious. It suggests that the psychological mechanisms underlying health behaviour are not disrupted by the presence of a digital intermediary. This points toward a general principle: health AI grounded in validated behaviour change theory is likely to be both more interpretable and more effective than health AI optimised purely for predictive accuracy.
What this means for Germany, and for the future of health AI
Germany occupies a particular position in the global conversation about AI in healthcare. Public trust in artificial intelligence is low. The 2024 Eurobarometer found that German respondents were among the most sceptical in Europe about the use of AI in medical contexts. This scepticism is not irrational. It reflects a culture that values data sovereignty, institutional accountability, and the primacy of the physician-patient relationship. It reflects a population that has, for good historical reasons, a deep aversion to opaque systems that make decisions about people without their understanding or consent.
Any health AI company that intends to operate meaningfully in Germany must reckon with this reality. The standard industry response has been to treat trust as a communications problem: publish a white paper, add an “explainability” layer after the fact, reference GDPR compliance. This approach misreads the nature of the resistance. German patients, clinicians, and insurers are not asking for better marketing. They are asking a structural question: can I understand what this system is doing, and can I hold it accountable?
This study provides the beginning of a structural answer. The four psychological constructs that our model uses to predict and support engagement are not black box features extracted through unsupervised learning. They are constructs drawn from 30 years of published health psychology research, recognised by clinicians, measurable by validated instruments, and interpretable without technical expertise. When a Betriebsarzt (occupational physician) or a Hausarzt (general practitioner) looks at the output of Loretta’s system, they see constructs they already understand: does this person believe they can change? Do they have a plan? Are they paying attention to their own behaviour? The AI does not replace this clinical reasoning. It enriches it with structured, longitudinal data that no clinician could gather alone.
The question for German health systems is not whether to adopt AI. Chronic disease burden, workforce shortages, and rising costs have made that question moot. The question is whether the AI systems they adopt will be ones their clinicians can understand, their patients can trust, and their institutions can govern. This research is designed to ensure that Loretta’s system meets all three conditions.
How Loretta uses these findings
These results are not an academic exercise. They form the operational foundation of every component in Loretta’s product architecture. The Psychological State Inference engine uses these four constructs as its primary parameters, meaning the system’s assessment of where a person stands psychologically is grounded in the same variables that this study has validated against a representative German population. The Behavioural Trajectory Engine models how these constructs shift over time, tracking not just what someone does but the motivational and volitional conditions that shape whether they will continue doing it. The Intervention Simulation Twin uses these validated weights to project which forms of support are most likely to work for a given individual at a given moment, before anything is tried.
The practical consequence is a system where every recommendation, every nudge, every piece of personalised guidance can be traced back to a specific psychological mechanism with published evidence behind it. This is what makes Loretta’s approach fundamentally different from the dominant paradigm in digital health, where AI models optimise for engagement metrics or clinical proxies without any theory of why a given intervention should work for a given person. Loretta’s system does not just predict. It explains. And it explains in terms that clinicians, patients, and regulators can interrogate.
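One way to picture the traceability claim above is a recommendation object that carries its own explanation: the construct that triggered it, the person's current score, and the importance weight behind it. Everything in the sketch (weights, threshold, suggestion texts, citation string) is a placeholder, not the study's validated values or Loretta's production logic.

```python
# Illustrative traceability sketch: each suggestion keeps the construct,
# score, and weight that produced it. All values are placeholders.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TracedRecommendation:
    suggestion: str
    construct: str          # the HAPA construct that triggered it
    current_score: float    # the person's current level (1-7 scale assumed)
    weight: float           # importance weight from the validation study
    evidence: str           # pointer to the published construct definition


# Placeholder importance weights; the real values come from the study.
WEIGHTS: Dict[str, float] = {
    "self_efficacy": 0.45, "outcome_expectations": 0.30,
    "action_planning": 0.20, "action_control": 0.15,
}

SUGGESTIONS: Dict[str, str] = {
    "self_efficacy": "Start with a small, clearly achievable weekly goal.",
    "outcome_expectations": "Review which symptoms improved after recent changes.",
    "action_planning": "Agree a concrete when/where/how plan for the next 7 days.",
    "action_control": "Enable a short daily check-in on the agreed behaviour.",
}


def recommend(scores: Dict[str, float], threshold: float = 4.0) -> List[TracedRecommendation]:
    """Rank low-scoring constructs by weighted deficit; keep the full trace."""
    deficits = {c: WEIGHTS[c] * (threshold - s)
                for c, s in scores.items() if c in WEIGHTS and s < threshold}
    ranked = sorted(deficits, key=deficits.get, reverse=True)
    return [TracedRecommendation(SUGGESTIONS[c], c, scores[c], WEIGHTS[c],
                                 "HAPA construct (Schwarzer)") for c in ranked]
```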
What comes next
This study establishes the psychological foundation. What it does not yet do is show that the system produces better health outcomes in the real world. That requires longitudinal clinical validation with people actually using the system over time, planned as a randomised controlled trial. We are committed to building that evidence base in the open, with the same rigour and transparency that produced these initial findings.
We are also building toward regulatory readiness. The EU AI Act classifies health AI as high risk, mandating transparency, human oversight, and responsible governance. Loretta’s architecture was designed to meet these requirements structurally, not through retrofitted compliance layers. Interpretability is intrinsic. Oversight is architectural. Accountability is traceable.
Psychological Transparency
Every system output traces to validated, published constructs from health psychology. No black box features. No unexplainable weightings. Clinicians can read, understand, and challenge the reasoning.
Data Sovereignty
Built on federated learning: patient data never leaves the healthcare institution where it was collected (a minimal sketch of this pattern follows after this list). Health data belongs to the people and institutions it comes from. This is not a feature. It is a design principle.
Human Oversight by Design
The system informs. The clinician decides. No component of the Digital Twin makes autonomous decisions about patient care. The AI extends clinical judgement. It does not replace it.
Equity as Infrastructure
Financial strain is built into the study design from the start, not added as an afterthought. Fairness auditing across population groups is a planned component of every development phase.
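The data sovereignty item above rests on federated learning, so a minimal sketch of that pattern may help: each institution trains on its own data locally and only model parameters are averaged centrally. The sites, model, and update rule here are invented for illustration and are not Loretta's implementation.

```python
# Minimal federated-averaging sketch: only model parameters leave each
# institution, never patient records. Everything here is invented.
import numpy as np

rng = np.random.default_rng(1)

# Each institution holds its own patient data locally (never shared).
true_w = np.array([0.5, 0.3, 0.2, 0.1])              # hypothetical construct weights
sites = []
for _ in range(3):
    X = rng.uniform(1, 7, size=(100, 4))              # local HAPA scores
    y = X @ true_w + rng.normal(0, 0.5, size=100)     # local engagement measure
    sites.append((X, y))

global_w = np.zeros(4)
for round_ in range(20):
    local_updates = []
    for X, y in sites:
        w = global_w.copy()
        for _ in range(5):                            # a few local gradient steps
            grad = X.T @ (X @ w - y) / len(y)
            w -= 0.01 * grad
        local_updates.append(w)                       # only parameters are sent
    global_w = np.mean(local_updates, axis=0)         # server averages the updates

print("aggregated weights:", np.round(global_w, 2))
```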
Trust in health AI cannot be claimed. It has to be earned, through evidence, through transparency, and through systems that are designed to be understood by the people who use and oversee them. This research is the first step in that evidence trail.
Loretta Health is looking for partners who share this conviction: health psychologists, health system leaders, insurers, researchers, and policymakers who want to build the evidence base for a new kind of health AI. One that starts with how people actually change, earns trust through rigour, and puts the human relationship at the centre of everything the technology does. If that describes you, we would very much like to hear from you.
About this Research
This document is a summary of the scientific paper: From Behavioural Theory to Computational Architecture: Psychological Predictors of Engagement Intention Toward a HAPA-Grounded Chronic Care Digital Twin. The research was conducted in partnership with BIFI (Berliner Institut für Innovationsforschung GmbH) and funded by the Investitionsbank Berlin (IBB) via the Transfer BONUS programme (TB3607/2025).