Trustworthy AI is not a slogan; it is a measurable engineering discipline. The EU’s investment in trustworthy AI spans research (€1.6 billion for AI through Horizon Europe), standardisation (CEN/CENELEC developing harmonised standards for AI Act conformity), and governance (the EU AI Office and national market surveillance authorities now fully operational). But for organisations building and deploying AI, the challenge is operational: how do you demonstrate trustworthiness in a way that satisfies regulators, earns buyer confidence, and enables scaling?
Our research addresses this challenge on several fronts. First, we develop systematic methods for designing evaluation programmes that test performance, robustness, generalisation, and drift, producing evidence that supports both internal confidence and external assurance. This approach aligns with emerging CEN/CENELEC standards so that evaluation evidence also satisfies conformity assessment requirements.
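To make the evaluation-programme idea concrete, here is a minimal Python sketch of two typical building blocks: a Population Stability Index for score drift and a perturbation-based robustness probe. This is our own illustrative code, not a CEN/CENELEC artefact; it assumes numpy and a scikit-learn-style `predict` interface, and both function names are ours.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference score distribution (e.g. validation data)
    and a live window; values above ~0.2 are a common rule of thumb
    for meaningful drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf   # catch values outside the reference range
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip so empty bins do not produce log(0) or division by zero.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

def accuracy_under(model, X, y, perturb=None) -> float:
    """Accuracy on clean inputs, or on perturbed inputs when `perturb`
    is given: a minimal robustness probe."""
    X_eval = perturb(X) if perturb is not None else X
    return float((model.predict(X_eval) == y).mean())
```

In a full programme, statistics like these would be computed per deployment window and per data slice, with acceptance thresholds agreed in advance, so that results feed directly into the assurance evidence rather than being interpreted ad hoc.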
Second, we develop context-appropriate methods for identifying, measuring, and mitigating bias in AI systems, including practical approaches to the EU AI Act’s provisions for processing sensitive personal data exclusively for bias detection and correction.
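The sketch below shows two common group-fairness measurements, the demographic parity gap and the equal opportunity gap. The metric names are standard in the fairness literature, but the binary 0/1 group encoding and the function signatures are our illustrative simplification, and the right metric is always context-dependent. Computations of this kind are exactly what the Act’s narrow bias-detection provision exists to enable.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups
    (`group` is a 0/1 membership indicator, `y_pred` binary predictions)."""
    return abs(float(y_pred[group == 1].mean()) - float(y_pred[group == 0].mean()))

def equal_opportunity_gap(y_pred: np.ndarray, y_true: np.ndarray,
                          group: np.ndarray) -> float:
    """Difference in true-positive rates between groups: sensitive to
    errors that deny qualified individuals a positive outcome.
    Assumes both groups contain positive examples."""
    tpr_1 = float(y_pred[(y_true == 1) & (group == 1)].mean())
    tpr_0 = float(y_pred[(y_true == 1) & (group == 0)].mean())
    return abs(tpr_1 - tpr_0)
```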
Third, we design operational models for human oversight that balance the benefits of automation with meaningful human control, covering role design, escalation pathways, decision transparency, and training for oversight personnel.
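One way to make “meaningful human control” operational is a routing policy that automates only confident, low-stakes decisions and escalates everything else. The sketch below is a minimal illustration of the pattern; the `Route` tiers, the `impact` labels, and the 0.95 threshold are all assumptions that a real deployment would calibrate per use case.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Route(Enum):
    AUTO_APPROVE = auto()    # system acts without human intervention
    HUMAN_REVIEW = auto()    # queued for a trained oversight reviewer
    SENIOR_REVIEW = auto()   # escalated to a senior decision-maker

@dataclass
class Decision:
    confidence: float   # model confidence in [0, 1]
    impact: str         # domain-defined, e.g. "low" or "high"
    rationale: str      # explanation surfaced to the reviewer

def route(decision: Decision, auto_threshold: float = 0.95) -> Route:
    """Automate only confident, low-stakes decisions; escalate the rest."""
    if decision.impact == "high":
        return Route.SENIOR_REVIEW          # high stakes always get a human
    if decision.confidence >= auto_threshold:
        return Route.AUTO_APPROVE
    return Route.HUMAN_REVIEW
```

The design choice worth noting is that impact overrides confidence: a highly confident model is never a substitute for human judgment where the stakes are high.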
Fourth, we build post-deployment governance architectures that detect performance degradation, trigger appropriate responses, and maintain audit trails, ensuring that AI systems remain trustworthy throughout their operational lifecycle.
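At its core, such an architecture pairs a degradation check with an append-only audit record, as in the sketch below. The metric, tolerance, and the `alert_and_review` response are placeholders; a real deployment would define these in its post-market monitoring plan.

```python
import json
import time

def check_window(metric_value: float, baseline: float, tolerance: float,
                 audit_path: str = "audit_log.jsonl") -> bool:
    """Compare a rolling-window metric against its baseline, append an
    audit record either way, and report whether a response was triggered."""
    degraded = (baseline - metric_value) > tolerance
    record = {
        "ts": time.time(),
        "metric": metric_value,
        "baseline": baseline,
        "degraded": degraded,
        "action": "alert_and_review" if degraded else "none",
    }
    with open(audit_path, "a") as f:   # append-only trail, one JSON line per check
        f.write(json.dumps(record) + "\n")
    return degraded
```

Logging the healthy checks as well as the failures is deliberate: an audit trail that only records incidents cannot demonstrate that monitoring was actually running in between.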
Finally, our research develops methods for packaging trustworthiness evidence for different audiences: procurement teams, regulators, end-users, and investors. Each audience requires different evidence at different levels of technical detail, and getting this right is critical for commercial success.
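One way to operationalise this is an evidence manifest that maps each audience to artefacts at an appropriate level of detail, as in the sketch below. The structure and field names are purely illustrative; the actual documentation requirements for high-risk systems are set by the AI Act’s annexes and the harmonised standards that implement them.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    artefact: str   # e.g. "robustness evaluation report"
    detail: str     # "summary" | "metrics" | "full_methodology"

@dataclass
class EvidencePackage:
    audience: str
    items: list = field(default_factory=list)

# Same underlying evidence, framed at different levels of detail.
procurement = EvidencePackage("procurement", [
    EvidenceItem("accuracy and robustness results", "summary"),
    EvidenceItem("human oversight procedures", "summary"),
])
regulator = EvidencePackage("regulator", [
    EvidenceItem("accuracy and robustness results", "full_methodology"),
    EvidenceItem("bias measurement and mitigation record", "full_methodology"),
    EvidenceItem("post-market monitoring plan", "full_methodology"),
])
```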