Interactive Assessment

Is Your Health AI Infrastructure Ready?

A GDNG compliance checklist for health AI infrastructure — 37 requirements across 6 domains. Track your readiness. Identify gaps. See how sovereign infrastructure resolves each one.

01 · Data Sovereignty & Residency
Critical

All health data processing occurs within German or EU-based Trust Centres — no cross-border transfers to third countries without an adequacy decision

GDNG Art. 303a · GDPR Art. 44–49
Loretta Addresses This

Federated learning architecture ensures raw data never leaves the originating Trust Centre. Models travel to data — data never travels.

Zero cross-border transfer by design
Critical

Raw patient data never leaves the originating data controller’s secure perimeter during AI model training

GDNG Art. 303b — Federated architecture requirement
Loretta Addresses This

Only encrypted model gradients are aggregated centrally. Differential privacy prevents patient re-identification.

Differential privacy guarantee target: ε ≤ 1.0
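For illustration, a minimal sketch of the kind of step this item describes: clipping a local model update and adding Gaussian noise before it leaves a Trust Centre. The function name, clip norm, and noise multiplier are hypothetical and do not reflect Loretta's implementation or its ε ≤ 1.0 calibration.

```python
import numpy as np

def privatize_update(update: np.ndarray,
                     clip_norm: float = 1.0,
                     noise_multiplier: float = 1.1) -> np.ndarray:
    """Clip a local gradient update and add Gaussian noise (Gaussian mechanism)
    so only a privatised update ever leaves the Trust Centre.
    Illustrative only; real epsilon accounting needs a DP library."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# The central aggregator sees only privatised updates, never raw patient data.
local_updates = [privatize_update(np.random.randn(8)) for _ in range(3)]
global_update = np.mean(local_updates, axis=0)
```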
High

Data Processing Agreements (Auftragsverarbeitungsverträge) in place with all infrastructure providers

GDPR Art. 28 · BDSG §62
Loretta Addresses This

Standard DPA templates, pre-configured for GKV insurer engagements and aligned with Art. 28 requirements.

DPA template library available
High

Data residency demonstrated through technical architecture — not merely contractual assurances

GDNG Art. 303a · BfDI guidance 2025
Loretta Addresses This

Architecture-level guarantee: federated nodes are physically deployed within customer infrastructure or certified German data centres.

Deployable on-premises or sovereign cloud
High

Encryption at rest (AES-256) and in transit (TLS 1.3) for all health data stores and API endpoints

BSI TR-02102-1 · GDPR Art. 32
Loretta Addresses This

AES-256 at rest, TLS 1.3 in transit enforced across all API endpoints and Trust Centre nodes.

BSI-aligned encryption standards
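As a small illustration of the transport half of this item, a Python client can refuse anything below TLS 1.3; the endpoint URL is hypothetical, and server-side and at-rest encryption must be configured separately.

```python
import ssl
import urllib.request

# Refuse any connection that cannot negotiate TLS 1.3 (client-side check only).
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

# Hypothetical endpoint, for illustration only.
with urllib.request.urlopen("https://trust-centre.example/api/status",
                            context=context) as response:
    print(response.status)
```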
Medium

No dependency on US-headquartered cloud providers for primary health data processing (CLOUD Act risk mitigation)

Schrems II implications · BfDI advisory 2024
Loretta Addresses This

Infrastructure-independent: runs on sovereign European cloud, AWS EU, Azure EU, or on-premises. No US vendor lock-in.

Multi-cloud / on-prem deployment options
High

Differential privacy guarantees applied to any aggregated model parameters leaving local Trust Centres

GDNG Art. 303c · Privacy-preserving computation
Loretta Addresses This

Differential privacy and gradient compression applied to all federated aggregation steps.

Differential privacy target: ε ≤ 1.0
02 · Lawful Processing & Data Protection
Critical

Lawful basis for health data processing identified — Art. 9(2)(h) provision of health or social care or Art. 9(2)(i) public interest in public health

GDPR Art. 9(2)(h)/(i) · GDNG enabling provisions
Loretta Addresses This

Configurable legal basis mapping per data type and processing operation, pre-mapped to SGB V §284ff and §6 GDNG provisions.

Legal basis configuration per API endpoint
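A minimal sketch of what per-endpoint legal basis configuration could look like; the endpoint names and mappings below are hypothetical, not Loretta's actual configuration.

```python
# Hypothetical mapping of API endpoints to documented legal bases.
LEGAL_BASIS = {
    "/risk-scores":       {"gdpr": "Art. 9(2)(h)", "national": "SGB V §284"},
    "/population-health": {"gdpr": "Art. 9(2)(i)", "national": "§6 GDNG"},
}

def assert_legal_basis(endpoint: str) -> dict:
    """Reject any processing operation that has no documented legal basis."""
    basis = LEGAL_BASIS.get(endpoint)
    if basis is None:
        raise PermissionError(f"no documented legal basis for {endpoint}")
    return basis
```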
Critical

Data Protection Impact Assessment (DPIA) completed for all AI processing activities involving health data

GDPR Art. 35 · GDNG Art. 303d
Loretta Addresses This

DPIA documentation template and risk assessment framework provided as part of deployment package.

DPIA template included in onboarding
High

Purpose limitation documented — AI models trained only for specified, explicit, and legitimate prevention or care purposes

GDPR Art. 5(1)(b) · GDNG secondary use provisions
Loretta Addresses This

Per-endpoint purpose scoping with immutable audit trails documenting the declared purpose of every API call.

Purpose-scoped API architecture
High

Data minimisation enforced — only necessary data fields ingested for each AI model’s specified purpose

GDPR Art. 5(1)(c) · Proportionality principle
Loretta Addresses This

Schema-level field filtering ensures only declared-necessary fields are processed per model configuration.

Field-level access control per model
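For illustration, schema-level field filtering can be as simple as an allow-list per model; the field names and dummy record below are hypothetical.

```python
# Hypothetical allow-list of fields declared necessary for one model.
ALLOWED_FIELDS = frozenset({"age_band", "icd10_codes", "medication_count",
                            "last_discharge"})

def minimise(record: dict, allowed: frozenset = ALLOWED_FIELDS) -> dict:
    """Drop every field not declared necessary for the model's stated purpose."""
    return {key: value for key, value in record.items() if key in allowed}

raw = {"age_band": "60-69", "icd10_codes": ["I50.9"], "medication_count": 7,
       "last_discharge": "2025-11-02", "name": "Erika Mustermann",
       "street_address": "Musterstrasse 1"}
print(minimise(raw))  # identifying fields never reach the model
```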
High

Pseudonymisation protocols applied before analytical processing, with re-identification keys stored separately

GDPR Art. 4(5) · BDSG §27 · GDNG Art. 303e
Loretta Addresses This

Built-in pseudonymisation layer with k-anonymity (k ≥ 5) enforced at query time. Keys managed by data controller.

k ≥ 5 anonymity threshold enforced
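A minimal sketch of query-time k-anonymity enforcement as described above; the choice of quasi-identifiers is illustrative.

```python
from collections import Counter

K_THRESHOLD = 5  # k >= 5, matching the checklist item above

def k_anonymous_counts(rows: list[dict], quasi_identifiers: list[str]) -> dict:
    """Return aggregate counts only for groups of at least K_THRESHOLD people;
    smaller groups are suppressed so individuals cannot be singled out."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return {group: count for group, count in groups.items() if count >= K_THRESHOLD}

# Example call: counts by (age_band, postcode_area) with small cells suppressed.
# result = k_anonymous_counts(pseudonymised_rows, ["age_band", "postcode_area"])
```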
Medium

Technical and Organisational Measures (TOMs) documented per GDPR Art. 32, including access controls, audit logging, and incident response

GDPR Art. 32 · BSI IT-Grundschutz
Loretta Addresses This

Pre-documented TOMs aligned with BSI IT-Grundschutz, including role-based access control and per-operation audit logging.

BSI-aligned TOMs documentation
Medium

Consent management or statutory basis documented for each data source feeding AI models

SGB V §284ff · GDNG enabling provisions
Loretta Addresses This

Consent and legal basis mapping per data type with configurable workflows for GKV statutory basis provisions.

SGB V / GDNG basis mapping templates
03 · Auditability & Transparency
Critical

Full lineage tracking for all AI model training runs — which data sources, Trust Centres, and model versions contributed to each output

GDNG Art. 303d · EU AI Act Art. 12
Loretta Addresses This

Immutable audit log recording every training run: data sources, Trust Centre contributions, hyperparameters, and output model hash.

Cryptographic hash chain per training run
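For illustration, a hash-chained audit log fits in a few lines; the entry fields are hypothetical and far simpler than a production registry.

```python
import hashlib
import json
import time

class AuditChain:
    """Append-only log where each entry commits to the previous entry's hash,
    so tampering with any record invalidates every later hash. Illustrative."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self.prev_hash = "0" * 64

    def record(self, event: dict) -> str:
        payload = json.dumps({"ts": time.time(), "prev": self.prev_hash, **event},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"payload": payload, "hash": digest})
        self.prev_hash = digest
        return digest

chain = AuditChain()
chain.record({"run_id": "train-001", "trust_centres": ["node-a", "node-b"],
              "model_hash": "sha256-placeholder"})
```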
High

Model versioning with immutable audit logs — every deployed model traceable to its training data and parameters

EU AI Act Art. 12 · GDNG Art. 303f
Loretta Addresses This

Git-like model versioning with tamper-proof hash chain. Full rollback capability to any previous model state.

Immutable model registry with rollback
High

Explainability documentation for all AI-generated risk scores or intervention recommendations

EU AI Act Art. 13 · GDNG transparency requirements
Loretta Addresses This

SHAP values plus causal effect estimates per recommendation. Interpretable by clinical staff without a data science background.

SHAP + causal effect per recommendation
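A minimal sketch of producing per-feature SHAP attributions for a single prediction, assuming the open-source shap package and a tree-based model on synthetic stand-in data; the causal effect estimates mentioned above are not shown.

```python
import shap  # assumes the open-source `shap` package is installed
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data; a real deployment would use the trained risk model.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])  # per-feature contribution, one case
print(contributions)
```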
Medium

Patient rights infrastructure — data subjects can request access, rectification, and erasure of data used in AI processing

GDPR Art. 15–17 · GDNG data subject provisions
Loretta Addresses This

Data subject access API enabling automated responses to Art. 15–17 requests across federated infrastructure.

Data subject access API endpoint
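As a sketch of what a data subject access endpoint could look like, using Flask; the route, pseudonym handling, and stubbed lookup are hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def lookup_records(pseudonym: str) -> list[dict]:
    """Stub: a real implementation would fan out to every federated node
    that holds data for this pseudonym."""
    return []

@app.get("/dsar/<pseudonym>/access")  # Art. 15 access request, illustrative
def subject_access(pseudonym: str):
    return jsonify({"pseudonym": pseudonym, "records": lookup_records(pseudonym)})
```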
Medium

Incident response plan for AI model failures, including patient notification and regulatory reporting procedures

GDPR Art. 33–34 · BfDI notification requirements
Loretta Addresses This

Integrated incident detection with automated alerting. Template-based regulatory notification workflows.

Automated incident alerting pipeline
Medium

Regular audit schedule (at least annual) with independent review of AI processing compliance

GDNG Art. 303f · Best practice
Loretta Addresses This

Continuous compliance monitoring dashboard with exportable audit reports for independent review.

Continuous monitoring + export-ready reports
04 · EU AI Act Readiness
Critical

High-risk AI classification assessed — health risk prediction and clinical decision support classified high-risk under EU AI Act Annex III

EU AI Act Art. 6 · Annex III, Category 5(b)
Loretta Addresses This

Architecture designed from day one for high-risk classification requirements. Medical device certification pathway under evaluation.

Certification pathway in evaluation
Critical

Bias audit framework implemented — mathematical fairness metrics enforced at training time, not post-hoc

EU AI Act Art. 10 · GDNG equity requirements
Loretta Addresses This

Equalized odds optimisation across SES quintiles. Intersectionality modelling (SES × gender × age). Bias correction validated across demographic subgroups.

Target: <5% outcome disparity across demographic groups
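For illustration, one half of the equalized-odds criterion (the true-positive-rate gap across groups) can be computed as below; the <5% target is taken from the item above, and everything else is a simplified sketch.

```python
import numpy as np

def true_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    positives = y_true == 1
    return float((y_pred[positives] == 1).mean()) if positives.any() else float("nan")

def equalized_odds_gap(y_true: np.ndarray, y_pred: np.ndarray,
                       groups: np.ndarray) -> float:
    """Largest difference in true-positive rate between demographic groups
    (one component of equalized odds; a full audit also compares
    false-positive rates). Illustrative only."""
    rates = [true_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)]
    return max(rates) - min(rates)

# A gap above 0.05 would breach the <5% disparity target named above.
```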
High

Training data governance documented — data quality, representativeness, and potential bias sources assessed

EU AI Act Art. 10 · GDNG Art. 303e
Loretta Addresses This

Training data governance framework with representativeness analysis, bias source mapping, and data quality scoring per dataset.

Data governance documentation per model
High

Human oversight mechanisms — AI recommendations do not auto-execute clinical interventions without qualified human review

EU AI Act Art. 14 · Clinical safety requirements
Loretta Addresses This

Human-in-the-loop architecture: all intervention recommendations require clinician approval before execution. No autonomous clinical actions.

Human-in-the-loop enforced by API design
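A minimal sketch of an approval gate that refuses to execute an unreviewed recommendation; the class and status names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    PENDING_REVIEW = "pending_review"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class Recommendation:
    patient_pseudonym: str
    intervention: str
    status: Status = Status.PENDING_REVIEW
    reviewed_by: Optional[str] = None

def execute(recommendation: Recommendation) -> None:
    """Refuse to act on any recommendation a clinician has not approved."""
    if recommendation.status is not Status.APPROVED or recommendation.reviewed_by is None:
        raise PermissionError("clinician approval required before execution")
    # downstream care-management action would start here
```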
High

Robustness testing completed — model performance validated across demographic subgroups (age, gender, SES)

EU AI Act Art. 15 · GDNG equity mandate
Loretta Addresses This

Continuous fairness audits in production with automated drift detection across demographic subgroups.

Correlation analysis with deprivation indices
Medium

AI system registered in the EU AI database as required for high-risk systems

EU AI Act Art. 49 — Registration requirement
Loretta Addresses This

Pre-formatted registration documentation aligned with EU AI database requirements.

Registration template prepared
High

Post-market monitoring plan for deployed models, including performance drift detection and retraining triggers

EU AI Act Art. 72 · GDNG continuous compliance
Loretta Addresses This

Continuous model performance monitoring with configurable drift thresholds and automated retraining triggers.

Real-time drift detection dashboard
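For illustration, a simple two-sample Kolmogorov-Smirnov check on model scores can serve as a drift signal; the threshold below is a hypothetical configuration value, not Loretta's.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_drift(reference_scores: np.ndarray, production_scores: np.ndarray,
                alpha: float = 0.01) -> dict:
    """Flag drift when the production score distribution differs significantly
    from the reference window; a flagged result could queue retraining."""
    statistic, p_value = ks_2samp(reference_scores, production_scores)
    return {"statistic": float(statistic), "p_value": float(p_value),
            "retrain": p_value < alpha}
```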
05 · Interoperability & Certification
Medium

EHDS secondary use readiness — pseudonymisation protocols aligned with EHDS Art. 33 requirements and the data permit process understood

EHDS Regulation Art. 33–37 · Expected 2026–2027
Loretta Addresses This

Pseudonymisation protocols pre-aligned with EHDS Art. 33. Data permit application workflows documented.

EHDS-aligned pseudonymisation protocols
High

FHIR R4 interoperability — AI system ingests and outputs data in HL7 FHIR R4 format per ePA infrastructure requirements

ePA/gematik specifications · EHDS interoperability
Loretta Addresses This

FHIR-native data layer with schema validation on every API call. Standardised export formats for EHDS compliance.

FHIR R4 schema validation per API call
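As a toy stand-in for full FHIR R4 schema validation, a structural check on required elements might look like the sketch below; a production system would validate against the official R4 StructureDefinitions or a dedicated FHIR library, and the resource subset here is illustrative.

```python
# Minimal, illustrative subset of FHIR R4 required elements per resource type.
REQUIRED_ELEMENTS = {
    "Patient": {"resourceType"},
    "Observation": {"resourceType", "status", "code"},
}

def validate_fhir(resource: dict) -> list[str]:
    """Tiny structural check standing in for full FHIR R4 validation."""
    resource_type = resource.get("resourceType")
    if resource_type not in REQUIRED_ELEMENTS:
        return [f"unsupported resourceType: {resource_type!r}"]
    return [f"missing required element: {element}"
            for element in REQUIRED_ELEMENTS[resource_type]
            if element not in resource]

observation = {"resourceType": "Observation", "status": "final",
               "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4"}]}}
print(validate_fhir(observation))  # -> []
```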
Medium

MDR classification assessed — determination of whether the AI system qualifies as a medical device under MDR Annex VIII

MDR 2017/745 · MDCG guidance on AI/ML
Loretta Addresses This

Designed for Class IIa medical device certification under the MDR. Clinical evaluation report framework in place.

MDR certification pathway under evaluation
Low

DiGA Fast-Track alignment — if applicable, AI components meet BfArM digital health application requirements

DiGAV §§3–5 · BfArM DiGA guidance
Loretta Addresses This

White-label SDK enables embedding Loretta capabilities within existing DiGA applications. DiGA-compatible API format.

DiGA-compatible API structure
Medium

SOC 2 Type II or equivalent security attestation planned for enterprise deployments

Industry best practice
Loretta Addresses This

SOC 2 Type II attestation planned; BSI IT-Grundschutz alignment already in place.

SOC 2 certification planned
06 · Organisational Governance & Operations
High

Data Protection Officer (DPO) appointed and involved in all AI deployment decisions involving health data

GDPR Art. 37–39 · BDSG §38
Loretta Addresses This

DPO consultation workflows built into deployment process. Compliance sign-off gates at each deployment stage.

DPO sign-off gates in deployment workflow
Medium

Staff training programme — all personnel working with AI systems that process health data trained on GDNG and GDPR requirements

GDPR Art. 39(1)(b) · Organisational accountability
Loretta Addresses This

Training materials and compliance onboarding documentation provided as part of enterprise deployment package.

Compliance training materials included
Medium

Vendor assessment completed for all third-party AI components — sub-processor compliance verified

GDPR Art. 28(2) · Supply chain accountability
Loretta Addresses This

Full sub-processor transparency. Supply chain documented with compliance attestations for all components.

Sub-processor register available
Medium

Business continuity plan for AI infrastructure — failover procedures ensure care is not disrupted by system outages

BSI IT-Grundschutz · Operational resilience
Loretta Addresses This

SLA-backed infrastructure with automated failover. Graceful degradation ensures care continuity during outages.

High-availability architecture with failover
High

Integration with the existing IT stack (Epic, SAP IS-H, Cerner, gematik TI) tested and validated

Operational deployment requirement
Loretta Addresses This

API-first architecture integrates with existing Epic/SAP/Cerner stacks. No data migration required.

Rapid integration timeline
Export

Save Your Assessment

Download your checklist as a PDF to share with your compliance team, attach to audit documentation, or track progress offline. Your checked items will be preserved in the export.

Uses your browser's print-to-PDF. Select "Save as PDF" as the destination.

This page is provided by Loretta Health UG for informational purposes only. It does not constitute legal advice. Regulatory requirements are subject to change. Organisations should consult qualified legal professionals for binding compliance assessments. References based on publicly available legal texts as of February 2026.