Chamara Sandeepa
AI Lead
PhD candidate and research engineer specialising in AI security, privacy, explainability, and quantitative robustness, leading GenShield AI's adversarial testing and LLM defence architecture.
Research & Impact
Chamara's doctoral research spans adversarial learning, distributed AI, and LLM security, resulting in more than fifteen peer-reviewed publications across tier-one venues. His work on privacy-aware explainability and robustness evaluation earned him the UCD School of Computer Science Student Award for research and leadership.
At GenShield AI he translates those insights into operational safeguards that help telecom and critical-infrastructure customers demonstrate resilience under the EU AI Act.
- Architects the adversarial testing and evaluation framework that exercises AI-based threat scenarios against production-scale pipelines.
- Designs quantitative robustness scoring for LLMs and ML models, balancing regulatory reporting with engineering feedback loops.
- Drives explainability and privacy reviews so that red-team findings become actionable playbooks for customer assurance teams.