Most health AI systems in Europe face a fundamental contradiction: the data needed to train effective models is siloed across insurers, hospitals, and research institutions, and existing regulations make centralising that data either illegal or prohibitively complex. Loretta resolves this tension by combining four scientific capabilities into a single infrastructure layer.
Nearly all health AI systems today are predictive. They produce risk scores, such as “This person has a 62% probability of developing Type 2 Diabetes,” but they cannot answer the question that actually matters: what should we do about it? Predictive models detect correlations, not causes. They might observe that frequent GP visits correlate with higher diabetes rates, but the visits are not causing diabetes. Acting on correlations alone can be misleading or harmful.
Loretta uses causal inference, a branch of statistics that estimates the actual effect of an intervention on an outcome, separating genuine causal effects from spurious correlations driven by confounders. Rather than estimating only average effects across populations, Loretta calculates the personalised benefit of a specific intervention for each individual, producing recommendations that are robust and clinically meaningful.
This transforms Loretta from a passive risk-scoring tool into an intervention recommendation engine. Instead of simply flagging “high risk,” Loretta can recommend a specific programme estimated to reduce an individual’s 5-year diabetes risk by a measurable amount. These recommendations are grounded in causal evidence, not correlation.
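To make this concrete, the sketch below shows one standard way to estimate personalised intervention effects: a T-learner that fits separate outcome models for enrolled and non-enrolled individuals, then recommends the programme where the estimated risk reduction is material. The data, features, and the five-percentage-point threshold are purely illustrative assumptions, not Loretta’s actual engine or parameters.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                  # hypothetical patient features
treated = rng.integers(0, 2, size=n)         # 1 = enrolled in a prevention programme
# Synthetic outcome: baseline risk plus a programme benefit that varies by individual
base = 1 / (1 + np.exp(-X[:, 0]))
risk = np.clip(base - 0.15 * treated * (X[:, 0] > 0), 0, 1)
outcome = rng.binomial(1, risk)              # 1 = developed diabetes within 5 years

# T-learner: separate outcome models for the treated and control groups.
m_treated = GradientBoostingClassifier().fit(X[treated == 1], outcome[treated == 1])
m_control = GradientBoostingClassifier().fit(X[treated == 0], outcome[treated == 0])

# Personalised benefit: estimated risk without the programme minus risk with it.
cate = m_control.predict_proba(X)[:, 1] - m_treated.predict_proba(X)[:, 1]

# Recommend the programme where the estimated risk reduction is material,
# e.g. more than five percentage points (threshold is illustrative).
recommend = cate > 0.05
print(f"Recommend programme for {recommend.mean():.0%} of individuals")
```

The key design point is that the output is a per-individual effect estimate, not a population-average risk score, which is what lets a flag like “high risk” become a concrete, ranked recommendation.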
Health data in Germany is among the most heavily regulated in the world. Statutory health insurance data must remain within certified Trust Centres, and GDPR classifies health data as a special category with strict processing requirements. Effective AI needs large, diverse datasets, but the legal framework prohibits centralising them. Every approach that relies on collecting data into a single location hits this wall in the European market.
Federated learning inverts the traditional approach. Instead of moving data to a central model, Loretta moves the model to the data. The model trains locally within each Trust Centre, and patient records never leave the secure perimeter. Only encrypted, privacy-protected model updates are shared. The result is a continuously improving model trained across diverse populations, with full audit trails and no centralised data exposure.
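A minimal sketch of this pattern under simplified assumptions: each site runs local training steps on data that never leaves its own scope, and a coordinator averages the resulting weights (federated averaging). The encryption and privacy noise applied to real updates are represented only by a comment; the sites, data, and model below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training pass; raw data never leaves this function's scope."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)  # logistic-regression gradient step
    return w  # in production, this update would be encrypted/noised before sharing

# Three hypothetical Trust Centres, each with a local, never-shared dataset.
sites = []
for _ in range(3):
    X = rng.normal(size=(500, 4))
    y = (X @ np.array([1.0, -0.5, 0.25, 0.0])
         + rng.normal(scale=0.5, size=500) > 0).astype(float)
    sites.append((X, y))

# Federated averaging: broadcast global weights, collect local updates, average.
global_w = np.zeros(4)
for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(updates, axis=0)

print("Federated model weights:", np.round(global_w, 2))
```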
Each new insurer or hospital that connects to the network improves the model for every other participant, without any of them sharing raw data. The more organisations join, the better the intelligence becomes for all. This is the network effect of data without the liability of data centralisation.
Many health AI companies build impressive technology but struggle to deploy it inside large, conservative institutions. Their solutions require deep integration work, custom deployments, and months of IT project management. For German statutory health insurers running complex legacy environments, this approach is a non-starter because their IT teams are already stretched thin.
Loretta is designed as API-first infrastructure. Every capability, including risk scoring, intervention recommendations, and fairness audits, is available as a secure, documented endpoint. There is no platform to install. Insurers connect through a standardised gateway and integrate through simple API calls, enabling a typical pilot deployment in four to six weeks rather than twelve to eighteen months.
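Illustratively, an integration might look like the snippet below. The gateway URL, endpoint path, and field names are placeholders, not Loretta’s published API; the sketch shows only the shape of an API-first integration.

```python
import requests

# Hypothetical endpoint and field names for illustration; consult the actual
# API documentation for real paths, schemas, and authentication.
API_BASE = "https://api.example-gateway.de/loretta/v1"  # placeholder URL
headers = {"Authorization": "Bearer <token>"}           # placeholder credential

# Request an intervention recommendation for one (pseudonymised) member.
payload = {
    "member_ref": "pseudonym-123",   # no raw identifiers cross the gateway
    "horizon_years": 5,
    "candidate_programmes": ["lifestyle_coaching", "nutrition_programme"],
}
resp = requests.post(f"{API_BASE}/recommendations", json=payload,
                     headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json())  # e.g. ranked programmes with estimated risk reductions
```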
Beyond speed, the API-first model converts large upfront capital expenditure into predictable operating costs, with transparent, usage-based pricing so insurers pay only for what they use.
Germany’s Health Data Use Act (GDNG) creates a new legal framework for health data in research and AI. The EU AI Act classifies health AI as “high-risk,” triggering mandatory requirements for transparency, bias auditing, and human oversight. Most health AI vendors build first and address compliance later. This retrofit approach is slow, fragile, and expensive.
Loretta is GDNG-native. The architecture was designed from day one around regulatory requirements. Loretta runs natively within Trust Centre infrastructure, ensuring no patient-level data ever leaves the secure environment. Complete audit trails record every model decision and data access event, and every output carries a human-readable explanation that meets the EU AI Act’s transparency requirements for high-risk AI systems.
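As a hedged illustration of what one audit-trail entry might capture — the field names and values below are hypothetical, not Loretta’s actual schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AuditRecord:
    """One illustrative audit-trail entry; field names are hypothetical."""
    timestamp: str       # when the decision was made (UTC, ISO 8601)
    model_version: str   # exact model that produced the output
    endpoint: str        # which API capability was invoked
    input_hash: str      # hash of the inputs, so no patient data is stored
    decision: str        # the output that was returned
    explanation: str     # human-readable rationale attached to the output

record = AuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="risk-model-2.3.1",
    endpoint="/recommendations",
    input_hash="sha256:9f2c...",
    decision="recommend lifestyle_coaching (est. -6pp 5-year risk)",
    explanation="High BMI and HbA1c trend; strong estimated programme benefit.",
)
print(json.dumps(asdict(record), indent=2))
```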
Instead of assembling specialist teams and building sovereign AI infrastructure from scratch, insurers access pre-certified endpoints that handle data sovereignty, causality, equity, and auditability by design. For insurers facing regulatory deadlines, Loretta makes compliance possible at scale.
Each pillar addresses a distinct challenge, but their power is in the combination. Causal AI without data sovereignty is illegal in Europe. Data sovereignty without causal AI produces risk scores no one can act on. Both without rapid deployment remain in the lab. And all three without regulatory-native design face months of compliance rework. Loretta integrates these four pillars into a single coherent system so insurers get actionable, compliant, and deployable intelligence from day one.
We are conducting a funded research study in collaboration with the Berliner Institut für Innovationsforschung (BIFI) to establish the psychological and behavioural foundations of AI-driven causal prevention. The BIFI study systematically examines how end users perceive, trust, and engage with causal health recommendations. It addresses acceptance, explainability thresholds, perceived fairness, and cognitive load. These findings directly inform Loretta’s design principles and reduce implementation risk before clinical deployment.
Building on these foundational insights, we are preparing a 200-patient randomised controlled trial designed to clinically validate our causal inference engine against standard diabetes management protocols.