Special track on Secure AI, co-located with IEEE BDS 2026, July 27-30, 2026, Fukuoka, Japan
Artificial Intelligence (AI)-driven technologies continue to proliferate across every sector, including healthcare, finance, transportation, and education, and are increasingly embedded in our daily lives. As a result, decisions made by such autonomous systems may have significant consequences, often without human intervention. While these advancements bring efficiency and innovation, they also raise critical concerns around security, privacy, and ethical accountability. As the human role in decision-making is progressively reduced, it becomes imperative to treat AI security and privacy not as afterthoughts, but as core design principles. AI systems are susceptible to a range of threats, including adversarial attacks, data poisoning, model extraction, and behavioral manipulation. At the same time, they can pose privacy risks through unintended information leakage, the misuse of personal data, or a lack of transparency in decision-making logic. These issues not only compromise system integrity but can also erode public trust in AI.
This track is part of the IEEE CISOSE Congress. The SecAI track aims to foster interdisciplinary discussion and explore novel approaches that address the pressing challenges at the intersection of AI security, privacy, and ethics. By bringing together researchers, practitioners, and policymakers, it seeks to deepen understanding and stimulate collaboration toward the development of robust, trustworthy, and ethically aligned AI systems.
AI security and privacy
- Adversarial machine learning
- Distributed/federated learning
- Machine unlearning
- AI approaches to trust and reputation
- AI misuse (e.g., misinformation, deepfakes)
- Machine learning and computer security
- Privacy-enhancing technologies, anonymity, and censorship (e.g., differential privacy in AI)
- Security and privacy of Large Language Models (LLMs)
- Secure large AI systems and models
- Privacy and security vulnerabilities of large AI systems and models
- Copyright of AI
- AI, surveillance, and privacy
AI ethics, society, and safety
- Governance, regulation, control, safety, and security of AI
- Value alignment and moral decision making
- Interpretability, explainability, and transparency of AI
- Fairness, equity, and equality in AI
- Human-centered AI, human-AI interaction, and human and AI teaming
- Ethical models/frameworks around AI and data
- AI, lawmaking, and the judiciary
- AI in public administration, social service provision, and social good
- AI, markets, and competition
- Safety in AI-based system architectures
- Detection and mitigation of AI safety risks
- Avoiding adverse side effects in AI-based systems
- Regulating AI-based systems: safety standards and certification
- Evaluation platforms for AI safety
- AI safety education and awareness
- Safety and ethical issues of Generative AI
- AI, health, and wellbeing
- AI and creativity, literature and the arts
- AI, democracy, and social movements
- Cultural, geopolitical, economic, employment, and other societal impacts of AI
- Environmental costs and climate impacts of AI
Track Chairs
Monowar Bhuyan, Umeå University, Sweden
Michele Carminati, Politecnico di Milano, Italy
Web Chair
Adil Bin Bhutto, Umeå University, Sweden
Steering Committee
- Jerry Gao (Chair), San Jose State University, USA
- Guido Wirtz, University of Bamberg, Germany
- Huaimin Wang, National University of Defense Technology, China
- Jie Xu, University of Leeds, UK
- Wei-Tek Tsai, Arizona State University, USA
- Axel Küpper, Technische Universität Berlin, Germany
- Hong Zhu, Oxford Brookes University, UK
- Longbing Cao, University of Technology Sydney, Australia
- Cristian Borcea, New Jersey Institute of Technology, USA
- Hiroyuki Sato, National Institute of Informatics, Japan
- Kuo-Ming Chao, Bournemouth University, UK