Call for Papers
We invite high-quality, original contributions that advance the theory and practice of Next Generation AI Systems.
Research Tracks
The NGEN-AI Conference 2025 welcomes research, experience, and vision papers that explore foundational methods, systems, and applications of next generation AI. Topics of interest for each track include, but are not limited to, the following:
Federated Learning
The Federated Learning track focuses on learning paradigms that enable models to be trained collaboratively across devices and organizations without centralizing raw data. We are particularly interested in approaches that push computation closer to the edge while preserving privacy, robustness, and scalability in realistic deployments.
Contributions may address algorithms for heterogeneous and non-IID data, personalization strategies, communication-efficient protocols, secure aggregation and differential privacy, and resource-aware federated learning on constrained hardware, as well as demonstrators and case studies in domains such as healthcare, finance, smart cities, and industrial IoT. Research topics in this track include, but are not limited to:
- Architectures for cross-device and cross-silo federated learning
- Federated optimization under non-IID, sparse, or unbalanced data distributions
- Personalized and on-device adaptation strategies in federated settings
- Communication-efficient FL (compression, sparsification, update scheduling)
- Privacy-preserving FL: secure aggregation, differential privacy, homomorphic encryption
- Robustness to poisoning, backdoor, and Byzantine attacks in federated scenarios
- Energy- and resource-aware FL on mobile, edge, and IoT devices
- Federated learning in vertical, horizontal, and hybrid data partitioning settings
- Federated analytics and federated evaluation techniques
- MLOps for FL: lifecycle management, monitoring, and deployment at scale
- Benchmarking, simulators, datasets, and reproducibility studies for FL
- Real-world applications in healthcare, finance, smart industry, and smart cities
- Regulatory, ethical, and governance aspects of federated and collaborative learning
Small & Large Language Models and Generative AI
This track targets advances in small and large language models (SLMs and LLMs) and other generative AI models that synthesize and reason over text, code, images, audio, and multimodal data. We welcome work on model architectures, training strategies, and deployment techniques that improve performance, controllability, safety, and efficiency of foundation and generative models in real-world settings.
Relevant contributions range from core modelling and optimization to evaluation, alignment, and application-driven studies, including systems that tightly integrate language and generative models with external tools, data sources, and complex software architectures. Research topics in this track include, but are not limited to:
- Architectures and training recipes for SLMs, LLMs, and foundation models
- Pre-training, instruction-tuning, alignment (e.g., RLHF, DPO, preference optimization)
- Domain-specific and compact SLMs for on-device and resource-constrained settings
- Prompt engineering, in-context learning, function calling, and tool-augmented pipelines
- Retrieval-augmented generation and knowledge-grounded generative models
- Generative models for text, code, images, audio, video, and multimodal content
- Model compression, distillation, quantization, and sparsity for efficient deployment
- Edge and on-device deployment of SLMs/LLMs and generative models
- Safety, robustness, and red-teaming of generative systems (toxicity, hallucinations, bias)
- Evaluation methodologies, benchmarks, and human-in-the-loop assessment
- Generative AI for scientific discovery, simulation, and data augmentation
- Software engineering with LLMs: code generation, refactoring, testing, and verification
- Governance, transparency, IP, and regulatory aspects of foundation and generative models
Deep Learning Architectures & Representation Learning
This track focuses on advances in deep learning architectures and representation learning methods that underpin next generation AI systems. We invite contributions that improve expressiveness, robustness, interpretability, and efficiency across supervised, unsupervised, and self-supervised learning paradigms.
We particularly encourage work that bridges novel architectures with deployment constraints, addresses data scarcity and bias, or opens up new application domains. Research topics in this track include but are not limited to:
- Novel neural architectures (transformers, graph neural networks, diffusion models, etc.)
- Self-supervised, contrastive, and representation learning at scale
- Multimodal learning and fusion of heterogeneous data sources
- Curriculum learning, meta-learning, and continual / lifelong learning
- Robust and certified deep learning under distribution shift and adversarial attacks
- Interpretable and explainable deep learning methods
- Data-centric AI: dataset curation, quality, and augmentation strategies
- Efficient training and inference: pruning, low-rank adaptation, and sparse models
- Neural architecture search and automated model design
- Applications of deep learning in vision, language, time series, recommender systems, and beyond
Agentic AI
This track concentrates on agentic AI systems that perceive, reason, plan, and act over extended time horizons—often in dynamic environments and in collaboration with humans or other agents. We are interested in both theoretical foundations and practical deployments of autonomous and semi-autonomous agents in digital and physical settings.
We particularly encourage submissions that connect planning and decision making with learning, perception, and interaction, and that critically examine the reliability, safety, and societal impact of agentic AI. Research topics in this track include, but are not limited to:
- Architectures for autonomous, semi-autonomous, and mixed-initiative agents
- Planning, reasoning, and long-horizon decision making for agentic systems
- Reinforcement learning, hierarchical RL, and model-based control for agents
- LLM-driven agents, tool-using agents, and workflow / task orchestration
- Multi-agent systems: coordination, negotiation, communication, and cooperation
- Human–agent interaction, explainability, and trust in agentic AI systems
- Safety, verification, alignment, and oversight for autonomous agents
- Simulation environments, digital twins, and benchmarks for agentic AI
- Agents in robotics, autonomous vehicles, logistics, smart grids, and IoT environments
- Social, economic, and ethical implications of pervasive agentic AI
- Engineering methodologies, software frameworks, and tooling for large-scale agent systems
- Hybrid symbolic–subsymbolic approaches for reasoning and acting
MLOps, AI Engineering & Lifecycle Management
This track addresses the engineering and operational aspects of building, deploying, and maintaining AI systems in production. It covers the full lifecycle from data and model pipelines to monitoring, governance, and socio-technical considerations in organizations.
We invite contributions that connect software engineering, DevOps, and platform engineering practices with the unique requirements of machine learning and foundation models. Research topics in this track include but are not limited to:
- MLOps platforms and infrastructure for scalable training and deployment
- CI/CD for ML, continuous training, and continuous evaluation
- Data and feature management: data versioning, feature stores, and lineage tracking
- Monitoring, observability, and incident response for AI systems
- Model governance, risk management, and compliance (e.g., AI Act, sectoral regulation)
- Testing, debugging, and quality assurance for ML components and pipelines
- Infrastructure for serving LLMs and generative models at scale
- Cost- and energy-aware deployment and scheduling of AI workloads
- Organizational processes and roles for AI/ML teams
- Case studies and lessons learned from real-world AI production deployments
Explainable AI (XAI) & Transparency
This track focuses on methods and practices that make next generation AI systems understandable, transparent, and auditable for humans. We welcome contributions that improve how AI systems explain their decisions and behaviors, enabling trust, accountability, and effective human oversight in real-world deployments.
We encourage work spanning intrinsic interpretability and post-hoc explanations, explanation quality evaluation, and human-centered design of explanations, including applications to foundation models, agentic systems, and distributed AI settings. Research topics in this track include but are not limited to:
- Post-hoc explanations (e.g., feature attribution, saliency, local surrogate models)
- Intrinsic interpretability and transparent model design
- Counterfactual and contrastive explanations
- Uncertainty estimation, calibration, and communicating confidence to users
- Explainability for LLMs and generative AI (faithfulness, grounding, rationale analysis)
- Explainability in federated, privacy-preserving, and edge AI settings
- Explainable decision making for agentic and multi-agent systems
- Human-centered explanation design, usability, and user studies
- Evaluation and benchmarking of explanations (faithfulness, robustness, usefulness)
- Auditing, debugging, and root-cause analysis for AI systems
- Transparency documentation (e.g., model cards, datasheets) and reporting standards
- Regulatory, ethical, and governance aspects related to transparency and explainability
Trustworthy, Responsible & Sustainable AI
This track focuses on the qualities that enable next generation AI systems to be adopted and relied upon in society: trustworthiness, responsibility, and sustainability. We welcome contributions that align AI systems with human values and public expectations, ensuring they are safe, fair, transparent, and robust across real-world contexts.
We particularly encourage work that connects technical advances with governance and socio-technical practices, including evaluation methodologies and lifecycle approaches that reduce risk and environmental impact while improving accountability. Research topics in this track include but are not limited to:
- Trustworthiness by design: safety, reliability, and robustness under distribution shift
- Fairness, bias mitigation, and inclusive AI across populations and contexts
- Accountability, transparency, and auditability in AI systems
- Human values and alignment: human-centered objectives, oversight, and control
- Responsible AI governance: policies, risk management, and compliance practices
- Privacy, security, and protection against adversarial and data poisoning attacks
- Evaluation frameworks, metrics, and benchmarks for trustworthy and responsible AI
- Monitoring and lifecycle management for responsible AI in production
- Sustainable AI: energy-efficient training/inference, green AI, and carbon-aware operation
- Responsible data practices: provenance, consent, documentation, and data stewardship
- Socio-technical studies of AI adoption, impact, and organizational readiness
- Case studies and lessons learned from responsible and sustainable AI deployments
AI Systems, Hardware & Edge/Cloud Infrastructures
This track focuses on systems, architectures, and hardware platforms that enable efficient and sustainable execution of next generation AI workloads. Submissions may address full-stack co-design from algorithms and compilers down to accelerators and distributed infrastructures.
We particularly welcome work that bridges AI models with constraints of real-world platforms, including edge devices, heterogeneous clusters, and specialized hardware. Research topics in this track include but are not limited to:
- Distributed and parallel systems for large-scale training and inference
- Scheduling and placement of AI workloads across edge, fog, and cloud
- Hardware accelerators (GPUs, TPUs, NPUs, FPGAs) and co-design for AI
- Systems support for LLMs and foundation models (sharding, offloading, caching)
- Energy-efficient and green AI computing, including carbon-aware orchestration
- Runtime systems, compilers, and libraries for AI workloads
- Edge AI and embedded AI for IoT, CPS, and real-time applications
- Resilience, fault tolerance, and reliability of AI systems and infrastructures
- Benchmarks, performance analysis, and optimization of AI systems
Applications & Societal Impact of Next Generation AI
This track brings together application-driven research and critical perspectives on the impact of next generation AI systems on individuals, organizations, and society. We welcome studies that combine technical innovation with domain insights, as well as empirical analyses of adoption, impact, and governance.
Interdisciplinary work at the intersection of AI, human–computer interaction, social sciences, and policy is particularly encouraged. Research topics in this track include but are not limited to:
- Next generation AI applications in healthcare, finance, education, mobility, and industry
- AI for sustainability, climate, energy, and environmental monitoring
- Human–AI collaboration, co-creation, and augmented decision making
- Fairness, accountability, transparency, and ethics in AI systems
- Regulation, standards, and governance frameworks for AI
- Socio-technical analyses of AI deployment and organizational transformation
- User studies, field deployments, and longitudinal evaluations
- Public sector and civic applications of AI (e-government, public services, smart cities)
- Education, upskilling, and capacity building for AI-literate societies