Datasets
Standard Dataset
Gen-100
- Submitted by:
- YunZhuo Chen
- Last updated:
- Wed, 09/18/2024 - 23:56
- DOI:
- 10.21227/ygje-5x67
Abstract
Recent advances in generative visual content have led to a quantum leap in the quality of artificially generated Deepfake content. In particular, diffusion models are causing growing concern in the community due to their ever-increasing realism. However, quantifying the realism of generated content remains challenging. Existing evaluation metrics, such as the Inception Score (IS) and the Fréchet Inception Distance (FID), fall short in benchmarking diffusion models due to the versatility of the generated images. To address this, we propose the Image Realism Score (IRS), an evaluation metric computed from five statistical measures of a given image. This non-learning-based metric not only efficiently quantifies the realism of generated images but also serves as a viable tool for detecting whether an image is real or fake.
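For context on the baseline metrics mentioned above, the Fréchet Inception Distance compares the mean and covariance of Inception-network feature embeddings for real versus generated images. A minimal sketch of the underlying Fréchet (Wasserstein-2) distance between two Gaussians is shown below; the toy feature arrays are synthetic stand-ins, not actual Inception embeddings, and this is not the IRS metric proposed in the article:

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; small imaginary
    # components from numerical error are discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Toy example with synthetic "feature" vectors standing in for
# Inception embeddings of real vs. generated images.
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 8))
fake_feats = rng.normal(loc=0.5, size=(500, 8))
fid = frechet_distance(real_feats.mean(0), np.cov(real_feats, rowvar=False),
                       fake_feats.mean(0), np.cov(fake_feats, rowvar=False))
```

A lower value indicates that the two feature distributions are closer; in practice FID is computed over whole image sets, which is one reason it struggles to score the realism of a single diffusion-generated image.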
To facilitate further efforts towards quantifying realism in diffusion-generated content, we also introduce a new dataset, Gen-100. It consists of 100 categories, each featuring 30 images produced using ChatGPT-generated prompts with various models, including the Stable Diffusion Model (SDM), Dalle2, Midjourney, and BigGAN.
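A small helper like the following can index the dataset after download. The directory layout (`Gen-100/<category>/<image file>`) is an assumption for illustration; the actual archive structure may differ, so adjust the paths to match the downloaded files:

```python
from pathlib import Path

# Hypothetical layout: Gen-100/<category>/<image>.png
# (assumed for illustration; check the real archive structure)
IMAGE_SUFFIXES = {".png", ".jpg", ".jpeg"}

def index_gen100(root):
    """Map each category directory name to a sorted list of its image files."""
    root = Path(root)
    return {
        cat.name: sorted(p.name for p in cat.iterdir()
                         if p.suffix.lower() in IMAGE_SUFFIXES)
        for cat in sorted(root.iterdir()) if cat.is_dir()
    }
```

With the stated composition, a complete copy would yield 100 categories of 30 images each, which such an index makes easy to verify.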