Description
With the rapid development of artificial intelligence (AI) technology, generative AI has become an emerging topic in recent years. Meanwhile, a large amount of AI-generated visual content has been produced, such as images, videos, 3D point clouds, and meshes, with wide applications in television, education, and business marketing, among many others. To produce such visual content, numerous generative models have been proposed, including generative adversarial networks (GANs), diffusion models, and contrastive language-image pre-training (CLIP).
However, AI-generated content raises many quality issues. For instance, the generated content may fail to preserve the semantic information of the input text prompts, posing challenges distinct from the quality assessment methods traditionally used for natural content. In addition, before applying AI-generated content in practical scenarios, low-quality content usually needs to be filtered out, which relies on effective quality assessment methods specifically designed for AI-generated content. More critically, beyond content generation, AI technologies are increasingly being used for content enhancement, i.e., improving existing visual materials in terms of resolution, style, or other attributes. This adds another layer of complexity to quality assessment, as metrics must evaluate not only the fidelity of generated content but also the impact of enhancement on the original information.
Proper quality assessment frameworks are essential for optimizing generative and enhancement models, paving the way for quality-driven advancements in AI technologies. This special session focuses on research contributions centered on the quality evaluation of AI-generated and AI-enhanced content. By establishing robust quality metrics and assessment tools, the session aims to support trustworthy and responsible AI development, ensuring that AI-generated and AI-enhanced content meets high standards of accuracy, relevance, and ethical responsibility. Such research is critical for refining existing models, developing new quality-driven methodologies, and enabling practical implementations across diverse applications. This gathering is expected to lay the groundwork for improved quality control across various AI applications, ultimately fostering greater public trust and broadening the positive impact of AI in creative, informational, and communicative domains.
Topics of Interest
We are seeking papers that include, but are not limited to, the following topics:
- Subjective quality assessment methods for evaluating AI-generated and AI-enhanced content.
- Objective quality metrics tailored to AI-driven content, including generation and/or enhancement processes.
- New benchmark datasets for the evaluation of quality in AI-generated and AI-enhanced content.
- Bias and fairness considerations in the generation and enhancement of AI-driven content.
- Quality-driven generative and/or enhancement models.
- Surveys of quality evaluation methodologies for AI-generated and AI-enhanced content.
- Applications of quality assessment in practical scenarios involving AI-generated and AI-enhanced content, across industries such as media, education, and marketing.
Organizers
- Wei Zhou (ZhouW26@cardiff.ac.uk) Cardiff University, UK.
- Nabajeet Barman (Nabajeet.Barman@sony.com) Sony Interactive Entertainment.
- Saman Zadtootaghaj (Saman.Zadtootaghaj@sony.com) Sony Interactive Entertainment.
- Guangtao Zhai (zhaiguangtao@sjtu.edu.cn) Shanghai Jiao Tong University, China.
- Alan C. Bovik (bovik@ece.utexas.edu) University of Texas at Austin, USA.