Important Dates
Main Papers:
- April 1, 2026: Submission deadline
- May 10, 2026: Author notification
- June 1, 2026: Camera-ready and author registration
Topics of Interest
Testing & Verification of AI Systems:
- Verification & Validation Methodologies & Frameworks
- Test Oracles & Adequacy Metrics
- Deep Learning & Reinforcement Learning Testing
- Adversarial Robustness & Safety Assurance
- Tools & Resources for Automated Testing of AI
AI-Driven Software Testing (AI4Test):
- Autonomous Test Orchestration
- Self-Healing & Automated Maintenance
- Search-Based & Optimized Testing
- Crowdsourced & Swarm Intelligence
- Verification of AI-Generated Code
Assurance of Agentic AI & Large Models:
- Agentic Reliability & Goal-Reasoning
- Multi-Agent Coordination & Swarm Validation
- LLM Quality, Trust, Fairness, Bias, & Ethics
- Multimodal & Generative Model Evaluation
- Testing of Large Models and Their Applications
Human-Machine Collaboration:
- Assurance of Human-AI Pair Programming
- Human-in-the-Loop (HITL) Verification
- Collaborative Debugging & Interactive Testing
- Human-AI Interaction (HAI) Metrics for QA
- AI Impact on CS Education & Assessment
Data, Policy, & Observability:
- Data Quality Validation & Enhancement for AI
- Quality Assurance for Unstructured Training Data
- Synthetic Data Generation & Validation
- Large-scale Unstructured Data Quality Certification
- AI & Data Management Policies
Domain-Specific & Physical AI:
- Embodied AI: Robotics, Drones, Autonomous Vehicles
- Computer Vision & NLP Quality Assurance
- Testing AI for Critical Domain Applications
- Cognitive Industry & Smart Manufacturing
- Identity & Security Verification
Submission
Submit original manuscripts (not published or submitted elsewhere) with the following page limits:
regular papers (8 pages),
short papers (4 pages),
AI Testing in Practice track papers (8 pages),
and Tool Demo track papers (6 pages).
We welcome both regular research papers, which describe original and significant work or report on case studies and empirical research, and short papers, which describe late-breaking research results or work in progress with timely and innovative ideas. All paper types may include up to 2 extra pages, subject to page charges.
The AI Testing in Practice track provides a forum for networking and for exchanging ideas on innovative or experimental practices in SE research that directly impacts the practice of software testing for AI. The Tool Demo track provides a forum for presenting and demonstrating innovative tools and/or new benchmarking datasets in the context of software testing for AI.
All papers must be written in English and must include a title, an abstract, and a list of 4-6 keywords. All papers must be prepared in the IEEE double-column proceedings format. Authors must submit their papers via the submission link by April 1, 2026, 23:59 AoE. For more information, please visit the conference website.
The use of AI-generated content in an article (including but not limited to text, figures, images, and code) must be disclosed in the acknowledgments section of the submitted article. All accepted papers will be published by IEEE Computer Society Press (EI-Index) and included in the IEEE Digital Library.