Workshop on Secure and Trustworthy AI
September 2026 in Naples, Italy
co-located with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases

Call for Papers

Important Dates

  • Paper submission deadline: June 5th, 2026 (AoE, UTC-12)
  • Acceptance notification: July 27th, 2026 (AoE, UTC-12)
  • Camera ready due: July 10th, 2026 (tentative)
  • Workshop day: September 2026

Overview

The increasing adoption of technologies based on Artificial Intelligence (AI) in critical infrastructure and decision-making processes has made AI-driven components a foundational part of complex software, cyber-physical, and socio-technical systems (e.g., malware detection, fraud detection, autonomous driving, or biometric systems). In these settings, AI outputs directly shape automated and human-in-the-loop decisions, making failures consequential beyond technical performance and raising fundamental concerns regarding the security, robustness, transparency, and trustworthiness of AI systems. The growing integration of AI into our daily lives exposes organizations and individuals to a widening range of vulnerabilities, such as adversarial attacks, bias, and lack of explainability, that can undermine both safety and public trust.

The workshop addresses the challenges of securing AI systems in both adversarial settings and security-critical applications. It focuses on the security challenges in the design of AI systems, including their vulnerabilities in adversarial domains and defense strategies, as well as the integration of explainable AI (XAI) to improve transparency, accountability, and robustness. The workshop brings together researchers and practitioners from the AI and security communities to advance trustworthy, secure, and accountable AI systems by discussing critical challenges, future directions, and recent advancements in developing secure and trustworthy AI-based systems.

Scope of Papers

SATAI welcomes research papers reporting results from mature or recently published work, as well as more speculative papers describing new ideas or preliminary exploratory work. Papers reporting industry experiences and case studies are also encouraged. Submissions are accepted in two formats:

  • Regular research papers of 12 to 16 pages, including references. To be published in the proceedings, research papers must be original, previously unpublished, and not concurrently submitted elsewhere.
  • Short research statements of at most 6 pages, including references. Research statements aim at fostering discussion and collaboration. They may review previously published research or outline emerging ideas. Papers based on recently published work will not be considered for publication in the proceedings.

Topics of Interest

Topics of interest include but are not limited to:

  • Artificial Intelligence for the security of systems and software
  • Artificial Intelligence for cyber threat detection (e.g., in malware detection, intrusion detection, spam detection)
  • Adversarial machine learning
  • Evasion, poisoning, and backdoor attacks
  • Model extraction, inversion, and privacy attacks
  • Prompt injection attacks
  • Defenses against adversarial attacks
  • Robustness of AI models against malicious attacks
  • Explainability of machine learning and deep learning models
  • Explainable AI for explaining AI-based security systems
  • Explainable AI to improve the accuracy of AI models
  • Explainable AI to improve the robustness of AI models against malicious attacks
  • Attacks on Explainable AI
  • Explainable AI manipulation
  • Privacy and information leakage through explanations
  • Explainability of foundation models and generative AI
  • Security applications via foundation models and generative AI
  • Human-in-the-loop security and XAI
  • Data-centric security

Submission Guidelines

All submissions must be made in PDF via Microsoft CMT and must adhere to the Springer LNCS style; templates are available on the Springer website. Tentatively, all regular workshop papers will be published in an LNCS proceedings volume (to be defined). At a minimum, a proceedings volume will be edited and published online.

Submissions must not substantially overlap with papers that have been published or that are simultaneously submitted to a journal or conference with proceedings. Authors should also refer to their previous work in the third person. For an accepted paper to be included in the proceedings, at least one author must register for, attend, and present the paper at the workshop.

Committee

Workshop Chairs

Program Committee

To be announced.