AI Code of Ethics of the LEGIER and "SCANDIC by LEGIER" Group
Table of contents
- 1. preamble & scope of application
- 2. basic values & guiding principles
- 3. governance & responsibilities (AI Ethics Board, RACI)
- 4. legal & standards framework (EU AI Act, GDPR, DSA, copyright law, trade law)
- 5. risk classification & AI Impact Assessment (AIIA)
- 6. data ethics & data protection (legal basis, DPIA, cookies, third country)
- 7. model & data lifecycle (ML lifecycle, data cards, model cards)
- 8. transparency, explainability & user instructions
- 9. human-in-the-loop & supervisory duties
- 10. security, robustness & red-teaming (prompt injection, jailbreaks)
- 11. supply chain, human rights & fair labor (Modern Slavery, LkSG-analog)
- 12. bias management, fairness & inclusion (vulnerable customers, accessibility)
- 13. generative AI, proof of origin & labeling (C2PA, watermark)
- 14. content, moderation & DSA processes (reporting, complaints, transparency)
- 15. domain-specific use (news, data, health, aviation, yachts, estate, pay/trade/trust/coin, cars)
- 16. third parties, procurement & vendor risk management
- 17. operation, observability, emergency & restart plans
- 18. incidents & remedies (ethics, data protection, security)
- 19. metrics, KPIs & assurance (internal/external)
- 20. training, awareness & cultural change
- 21. implementation & roadmap (0-6 / 6-12 / 12-24 months)
- 22. roles & RACI matrix
- 23. checklists (AIIA short, data release, go-live gate)
- 24. forms & templates (Model Card, Data Card, Incident Report)
- 25. glossary & references
1. preamble & scope of application
This Code sets out binding principles, processes and controls for the development, procurement, operation and use of AI in the LEGIER Group. It applies Group-wide to employees, managers, processors, suppliers and partners.
It integrates existing Group guidelines (data protection, digital services processes, corporate governance, sustainability, human rights policy, modern slavery statement) and expands them to include AI-specific requirements.
The goal is to enable benefits and innovation, to make risks manageable, and to protect the rights of users, customers and the general public.
2. basic values & guiding principles
- Human dignity and fundamental rights stand above economic efficiency. AI serves people - never the other way around.
- Legal conformity: Compliance with EU AI Act, GDPR, DSA and sector-specific standards. No use of prohibited practices.
- Responsibility & Accountability: A responsible owner is appointed for each AI system; decisions are traceable and contestable.
- Proportionality: Balance of purpose, risk, intensity of intervention and social impact.
- Transparency & explainability: Adequate information, documentation and communication channels on functionality, data situations and limitations.
- Fairness & inclusion: Systematic bias testing, protection of vulnerable groups, accessibility and multilingualism.
- Security & resilience: Security-by-design, defense-in-depth, continuous hardening and monitoring.
- Sustainability: Efficiency of models and data centers (energy, PUE/CFE), life cycle view of data/models.
3. governance & responsibilities (AI Ethics Board, RACI)
AI Ethics Board (AIEB): Interdisciplinary (tech, legal/compliance, data protection, security, editorial/product, people). Tasks: Updating policies, issuing approvals (especially high-risk), deciding on conflicts, monitoring reports.
Roles: Use Case Owner, Model Owner, Data Steward, DPO, Security Lead, Responsible Editor, Service Owner, Procurement Lead.
Boards & Gateways: AIIA approval before go-live; change advisory board for material changes; annual management reviews.
RACI principle: Clear assignment of responsibility for each activity (Responsible, Accountable, Consulted, Informed).
4. legal & standards framework (EU AI Act, GDPR, DSA, copyright law, trade law)
- EU AI Act: Risk-based framework with prohibitions, obligations for high-risk systems, documentation, logging, governance, transparency obligations; staggered applicability from 2025/2026.
- GDPR: Legal bases (Art. 6/9), rights of data subjects, privacy by design/default, data protection impact assessment (DPIA), third country transfers (Art. 44 et seq.).
- DSA: Platform processes for notification, complaints, transparency reports, risk assessments of large platforms.
- Copyright & ancillary copyrights / personal rights: Clear license chains, image/name rights, third-party domiciliary rights.
- Industry-specific requirements (e.g. aviation/maritime law/health) must also be complied with.
5. risk classification & AI Impact Assessment (AIIA)
Classification:
- Prohibited practices (not permitted)
- High-risk systems (strict obligations)
- Limited risk (transparency)
- Minimal risk
AIIA procedure: Description of purpose/scope, stakeholders, legal basis and data sources; risk analysis (legal, ethical, safety, bias, environmental impact); mitigation plan; decision (AIEB approval).
Re-assessments: For material changes, annually for high risk; documentation in the central register.
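The classification tiers and AIIA register described above can be sketched in code. This is a minimal illustration, not a prescribed implementation; the class and field names (`RiskClass`, `AIIARecord`, `needs_reassessment`) are hypothetical, and only the annual re-assessment rule for high-risk systems from this section is encoded.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskClass(Enum):
    """The four EU AI Act risk tiers listed in this section."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

@dataclass
class AIIARecord:
    """One entry in the central AIIA register (illustrative fields)."""
    use_case: str
    purpose: str
    risk_class: RiskClass
    legal_basis: str
    mitigations: list = field(default_factory=list)
    approved_by_aieb: bool = False
    last_assessed: date = None

    def needs_reassessment(self, today: date) -> bool:
        # High-risk systems are re-assessed at least annually;
        # material changes trigger a re-assessment regardless of tier.
        if self.risk_class is RiskClass.HIGH and self.last_assessed:
            return (today - self.last_assessed).days >= 365
        return False
```

A register built on records like this makes the "100 % AIIA coverage" KPI in section 19 directly measurable.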
6. data ethics & data protection (legal basis, DPIA, cookies, third country)
- Data minimization & purpose limitation; pseudonymization/anonymization preferred.
- Transparency: Data protection information, information and deletion channels; portability; objection options.
- Cookies/Tracking: Consent management; revocation; IP anonymization; only approved tools.
- Third country transfers: Only with suitable guarantees (SCC/adequacy); regular testing of subprocessors.
- DPIA: Mandatory for high-risk processing; document technical/organizational measures (TOMs).
7. model & data lifecycle (ML lifecycle, data cards, model cards)
Data Lifecycle: Acquisition → Curation → Labeling → Quality gates → Versioning → Retention/Deletion.
Model Lifecycle: Problem definition → Architecture selection → Training/finetuning → Evaluation (offline/online) → Release → Operation → Monitoring → Retraining/retirement.
Data Cards: Origin, representativeness, quality, bias findings, restrictions on use.
Model Cards: Purpose, training data, benchmarks, metrics, limitations, expected error patterns, do's/don'ts.
Provenance & Reproducibility: Hashes, data/model versions, pipeline proofs.
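The provenance requirement above (hashes, data/model versions, pipeline proofs) can be illustrated with a small sketch. The function names (`artifact_hash`, `provenance_record`) are hypothetical; the point is only that each released data and model artifact gets a cryptographic fingerprint tied to a pipeline version.

```python
import hashlib
import json

def artifact_hash(path: str) -> str:
    """SHA-256 fingerprint of a data or model artifact, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_record(data_path: str, model_path: str, pipeline_version: str) -> str:
    """A reproducibility proof linking data, model and pipeline versions."""
    return json.dumps({
        "data_sha256": artifact_hash(data_path),
        "model_sha256": artifact_hash(model_path),
        "pipeline": pipeline_version,
    }, sort_keys=True)
```

Storing such records alongside Model Cards makes it possible to verify, after the fact, exactly which data and model versions produced a given release.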
8. transparency, explainability & user instructions
- Labeling for AI interaction and AI-generated content.
- Explainability: Explanations tailored to the use case in layman's terms (local/global).
- User instructions: Purpose, main influencing factors, limits; feedback and correction methods.
9. human-in-the-loop & supervisory duties
- Human oversight as standard for relevant decisions (especially high risk).
- Four-eyes principle for editorially/socially sensitive assignments.
- Override/cancel functions; escalation paths; documentation.
10. security, robustness & red-teaming (prompt injection, jailbreaks)
- Threat modeling (STRIDE + AI-specific): Prompt injection, training data poisoning, model theft, data protection leakage.
- Red-teaming & adversarial tests; jailbreak prevention; rate limiting; output filtering; secret scanning.
- Robustness: Fallback prompts, guardrails, rollback plans; canary releases; chaos tests for safety.
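One of the defense-in-depth layers named above, output filtering, can be sketched as a simple pattern screen. This is a deliberately minimal illustration: the pattern lists here are hypothetical placeholders, and a production deployment would rely on maintained rule sets and additional controls (guardrails, rate limiting, secret scanning) rather than this filter alone.

```python
import re

# Hypothetical deny patterns; real deployments use curated, maintained rule sets.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]
SECRET_PATTERN = re.compile(r"(?:api[_-]?key|secret)\s*[:=]\s*\S+", re.I)

def screen_output(text: str) -> tuple:
    """Return (allowed, reason). One defense-in-depth layer, never the only one."""
    for pat in INJECTION_PATTERNS:
        if pat.search(text):
            return False, "possible prompt-injection artifact"
    if SECRET_PATTERN.search(text):
        return False, "possible secret leakage"
    return True, "ok"
```

Blocked outputs would be logged and fed back into the red-teaming and threat-modeling loop described in this section.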
11. supply chain, human rights & fair labor (Modern Slavery, LkSG-analog)
- Human rights due diligence: Risk analysis, supplier code of conduct, contractual commitments, audits, remedial action.
- Modern Slavery: Annual declaration, sensitization, reporting channels.
- Labor standards: Fair pay, working hours, health protection; protection of whistleblowers.
12. bias management, fairness & inclusion (vulnerable customers, accessibility)
- Bias checks: Data set analyses, balancing, various test groups, fairness metrics; documented mitigation.
- Vulnerable customers: Protection goals, alternative channels, clear language; no exploitation of cognitive weaknesses.
- Accessibility: WCAG conformity; multilingualism; inclusive approach.
13. generative AI, proof of origin & labeling (C2PA, watermark)
- Labeling: Visible labels/metadata for AI content; notice for interactions.
- Proof of origin: C2PA provenance data, signatures/watermarks where technically possible.
- Copyrights/property rights: Clarify licenses; training data compliance; document rights chain.
14. content, moderation & DSA processes (reporting, complaints, transparency)
- Reporting channels: Low-threshold user reporting; prioritized processing of illegal content.
- Complaints processes: Transparent justification, objection, escalation.
- Transparency reports: Periodic publication of relevant key figures and measures.
15. domain-specific use (news, data, health, aviation, yachts, estate, pay/trade/trust/coin, cars)
- News/Publishing: Research assistance, translation, moderation; clear labeling of generative content.
- SCANDIC DATA: Secure AI/HPC infrastructure, tenant separation, HSM/KMS, observability, compliance artifacts.
- Health: Evidence-based use, human final decision, no untested diagnoses.
- Aviation/Yachts: Safety processes, human supervision, emergency procedures.
- Estate: Valuation models with fairness checks; ESG integration.
- Pay/Trade/Trust/Coin: Fraud prevention, KYC/AML, market surveillance, explainable decisions.
- Cars: Personalized services with strict data protection.
16. third parties, procurement & vendor risk management
- Due diligence before onboarding: Security/data protection level, data locations, subprocessors, certificates.
- Contracts: Audit rights, transparency and remediation clauses, SLA/OLA metrics.
- Monitoring: Performance KPIs, exchange of findings/incidents, exit plans.
17. operation, observability, emergency & restart plans
- Operation: Observability (logs, metrics, traces), SLO/SLI management, capacity planning.
- Emergency: Runbooks, DR tests, recovery times, communication plans.
- Configuration/secret management: Least-privilege, rotations, hardening.
18. incidents & remedies (ethics, data protection, security)
- Ethics incidents: Unwanted discrimination, disinformation, unclear origin - immediate measures and AIEB review.
- Data protection incidents: Reporting processes to DPO/supervision; information for affected parties; root cause analysis.
- Security incidents: CSIRT procedures, forensics, lessons learned, preventive measures.
19. metrics, KPIs & assurance (internal/external)
- Mandatory KPIs: 100% AIIA coverage of productive AI use cases; 95% training completion rate; zero open critical audit findings.
- Fairness metrics: Disparate Impact, Equalized Odds (use-case specific).
- Sustainability: Energy/PUE/carbon figures of the data centers; efficiency of the models.
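The two fairness metrics named above can be stated precisely in a few lines. This is a minimal sketch of the standard definitions; the function names and thresholds are illustrative, and which metric applies remains use-case specific as the section says.

```python
def disparate_impact(pos_rate_protected: float, pos_rate_reference: float) -> float:
    """Ratio of favorable-outcome rates between a protected and a reference group.
    The widely used 'four-fifths rule' flags values below 0.8 for review."""
    return pos_rate_protected / pos_rate_reference

def equalized_odds_gap(tpr_a: float, tpr_b: float, fpr_a: float, fpr_b: float) -> float:
    """Largest absolute gap in true-positive or false-positive rates
    between two groups; 0.0 means odds are perfectly equalized."""
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))
```

Both figures are cheap to compute per release and can be tracked as KPIs alongside the AIIA coverage and training-rate targets above.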
20. training, awareness & cultural change
- Mandatory training (annually): AI ethics, data protection, security, media ethics; target group-specific modules.
- Awareness campaigns: Guides, brown-bag sessions, consultation hours; internal communities of practice.
- Culture: Leadership role model function, error culture, rewarding responsible action.
21. implementation & roadmap (0-6 / 6-12 / 12-24 months)
- 0-6 months: Inventory of AI use cases; AIIA process; minimum controls; training wave; supplier screening.
- 6-12 months: Roll out red teaming; first transparency reports; energy program; finalize RACI.
- 12-24 months: ISO/IEC 42001 alignment; limited assurance; continuous improvement; CSRD/ESRS preparation (if applicable).
22. roles & RACI matrix
- Use Case Owner (A): Purpose, benefits, KPIs, budget, reassessments.
- Model Owner (R): Data/training/evaluation, Model Card, drift monitoring.
- DPO (C/A for data protection): Legal basis, DPIA, rights of data subjects.
- Security Lead (C): Threat modeling, red teaming, TOMs.
- Responsible Editor (C): Media ethics, labeling, correction register.
- Service Owner (R): Operation, SLO, incident management.
- Procurement Lead (R/C): Third parties, contracts, exit plans.
23. checklists (AIIA short, data release, go-live gate)
- AIIA quick check: Purpose? Legal basis? Affected parties? Risks (legal/ethical/security/bias/environmental)? Mitigation? HIL controls?
- Data release: Source lawful? Minimization? Retention? Access? Third country?
- Go-live gate: Artifacts complete (data/model cards, logs)? Red team results addressed? Monitoring/DR set up?
24. forms & templates (Model Card, Data Card, Incident Report)
- Model Card template: Purpose, data, training, benchmarks, limitations, risks, responsible persons, contact.
- Data Card template: Origin, license, quality, representativeness, bias checks, usage restrictions.
- Incident report template: Incident, effects, affected persons, immediate measures, root cause, remedy, lessons learned.
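The Model Card template above, combined with the go-live gate from section 23, suggests a simple completeness check: no release while any template field is empty. The following sketch is illustrative only; the class name `ModelCard` and its field set merely mirror the fields listed in this section.

```python
from dataclasses import dataclass, fields

@dataclass
class ModelCard:
    """Fields from the Model Card template; all default to empty."""
    purpose: str = ""
    training_data: str = ""
    benchmarks: str = ""
    limitations: str = ""
    risks: str = ""
    responsible: str = ""
    contact: str = ""

    def is_complete(self) -> bool:
        # Go-live gate condition: every field must be filled in.
        return all(getattr(self, f.name).strip() for f in fields(self))
```

The same pattern applies to the Data Card and incident report templates: each becomes a typed record whose completeness can be checked automatically at the release gate.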
25. glossary & references
Glossary: AI system, generative AI, high-risk system, AIIA, HIL, C2PA, red teaming, DPIA, RACI, SLO/SLI.
References:
- EU AI Act
- GDPR
- DSA
- OECD AI Principles
- NIST AI RMF
- ISO/IEC 42001
- Internal guidelines (data protection, DSA processes, modern slavery, sustainability)
Note: This AI Code supplements existing LEGIER guidelines, including data protection, digital services, human rights/supply chain, corporate governance, sustainability and modern slavery. It is an integral part of the compliance framework of the LEGIER Group (LEGIER Beteiligungs mbH).