Workshop on Secure and Trustworthy AI (STAI)
7 September 2026 in Naples, Italy
co-located with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2026)

Call for Papers

Important Dates

  • Paper submission deadline: June 5th, 2026 (AoE, UTC-12)
  • Acceptance notification: June 27th, 2026 (AoE, UTC-12)
  • Camera-ready due: July 10th, 2026 (tentative)
  • Workshop day: September 7th, 2026

Overview

The increasing adoption of Artificial Intelligence (AI) technologies in critical infrastructure and decision-making processes has made AI-driven components a foundational part of complex software, cyber-physical, and socio-technical systems (e.g., malware detection, fraud detection, autonomous driving, and biometric systems). In these settings, AI outputs directly influence automated and human-in-the-loop decisions, making failures consequential beyond technical performance and raising fundamental concerns regarding the security, robustness, transparency, and trustworthiness of AI systems. While machine learning has demonstrated remarkable performance across a wide range of applications, a growing body of research has shown that AI systems are inherently vulnerable. Adversarial manipulation can compromise not only model predictions but also other critical properties of AI systems, exposing organizations and individuals to significant risks. Vulnerabilities such as adversarial examples, neural backdoors, bias, privacy leakage, and lack of transparency can undermine safety, reliability, and public trust, particularly in security- and safety-critical environments.

This workshop addresses the challenge of holistically securing AI systems beyond an accuracy-centric perspective. It focuses on vulnerabilities and defense strategies across the full AI lifecycle, including adversarial learning, security-critical AI applications, and the role of auxiliary components that support model deployment, interpretation, and human oversight. In particular, the workshop emphasizes the security implications of mechanisms such as explainability, uncertainty estimation, and system-level constraints, considering them as integral parts of AI systems rather than isolated add-ons.

Scope of Papers

STAI welcomes research papers reporting results from mature or recently published work, as well as more speculative papers describing new ideas or preliminary exploratory work. Papers reporting industry experiences and case studies are also encouraged. Submissions are accepted in two formats:

  • Regular research papers of 12 to 16 pages, including references. To be published in the proceedings, research papers must be original, not previously published, and not submitted concurrently elsewhere.
  • Short research statements of at most 6 pages, including references. Research statements aim to foster discussion and collaboration; they may review previously published research or outline emerging ideas. Papers based on recently published work will not be considered for publication in the proceedings.

Topics of Interest

Topics of interest include but are not limited to:

  • Trustworthy and secure-by-design training and AI pipelines
  • Adversarial machine learning
  • Evasion, poisoning, backdoor, jailbreak, physical-world, and supply-chain attacks
  • Prompt injection and security risks in foundation and generative models
  • Model extraction, inversion, membership inference, and other privacy attacks
  • Attacks on explanations, uncertainty estimation, confidence calibration, and sustainability constraints
  • Robustness and defenses against adversarial and system-level attacks
  • System-level robustness assessment beyond predictive accuracy
  • Trustworthiness evaluation metrics and holistic AI security benchmarks
  • Explainability of machine learning and deep learning models
  • Explainable AI for explaining AI-based security systems
  • Explainable AI to improve the accuracy of AI models
  • Explainable AI to improve the robustness of AI models against malicious attacks
  • Attacks on explainability methods and explanation manipulation
  • Privacy and information leakage through interpretability mechanisms
  • Privacy-preserving learning and differential privacy under adversarial settings
  • Human-in-the-loop security and adversarial decision manipulation
  • Security and trustworthiness of agentic and autonomous AI systems
  • Applications of AI to improve security in safety-critical domains (e.g., cybersecurity, fraud detection, biometrics, autonomous systems)
  • Artificial Intelligence for cyber threat detection (e.g., malware detection, intrusion detection, and spam detection)
  • Data-centric security, including poisoning detection, secure data curation, and lifecycle protection

Submission Guidelines

All submissions must be made in PDF through Microsoft CMT and must adhere to the Springer LNCS style. Templates are available here. Tentatively, all regular workshop papers will be published in an LNCS proceedings volume (to be defined); at a minimum, a proceedings volume will be edited and published online.

Submissions must not substantially overlap with papers that have been published or that are simultaneously submitted to a journal or a conference with proceedings. Authors should refer to their own previous work in the third person. Accepted papers will be published in the Springer LNCS proceedings.

All accepted submissions must be presented at the workshop: at least one author of each accepted paper must attend and present the paper for it to be included in the proceedings.

Submission link: https://cmt3.research.microsoft.com/ECMLPKDDWT2026/Track/34/Submission/Create

Committee

Workshop Chairs

Program Committee

  • Andrea Ponte, University of Genova, Italy
  • Angelo Sotgiu, University of Cagliari, Italy
  • Annalisa Appice, University of Bari, Italy
  • Antonio Pecchia, University of Sannio, Italy
  • Cristian Manca, University of Cagliari, Italy
  • Dario Lazzaro, University of Genova, Italy
  • Donato Malerba, University of Bari, Italy
  • Francesco Mercaldo, University of Molise, Italy
  • Giulio Rossolini, Scuola Superiore Sant'Anna, Italy
  • Hubert Baniecki, University of Warsaw, Poland
  • Lorenzo Cazzaro, University of Luxembourg, Luxembourg
  • Luca Melis, University of Cagliari, Italy
  • Thorsten Eisenhofer, CISPA Helmholtz Center for Information Security, Germany
  • Tommaso Zoppi, University of Florence, Italy