Computer Science & Information Engineering
190034 United States
Integrity: Generalized Artificial Image Classification With Noise Domain Localization
AI image generation models can now create and alter photographs at scale. Recent reports predict that by 2027, AI-assisted misinformation will rank above warfare and natural disasters as the greatest threat to global economic security. Detectors of AI-generated images have struggled to keep pace with improvements in generated-image quality, making it difficult for an untrained eye to identify AI content. Existing detectors fail when tested on unknown generators, are resource-intensive, rely heavily on machine learning, and are vulnerable to attacks. This paper introduces Integrity, a software tool that detects images produced by any generation model, without machine learning, by analyzing statistical deviations in an image’s noise pattern, searching for indicators of authenticity rather than signs of artificiality. An original dataset of high-resolution authentic images was paired with artificial images from multiple generation models and run through Integrity’s algorithm. Scores for authentic images were empirically shown to fall within a threshold inconsistent with AI content. The resulting algorithm achieved an average classification accuracy of 97.24% on the custom dataset, reducing computational cost and outperforming other detectors by more than 23%. Integrity also localizes small manipulated regions within otherwise authentic images, includes a built-in protection mechanism for detecting potential attacks, and serves as a comprehensive tool applying a novel statistical approach to detecting AI-generated image content at a localized level. By improving on the accuracy, speed, security, and efficiency of ML-based detectors, Integrity offers the promise of global reach, giving more people access to image authentication than ever before.
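The abstract does not disclose Integrity's actual statistic or threshold, but the general idea of scoring an image's noise residual against a threshold of "authenticity" can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the box-blur denoiser, the residual-variance statistic, and the threshold bounds are all hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def noise_residual(image: np.ndarray) -> np.ndarray:
    """Isolate high-frequency content by subtracting a 3x3 box blur.

    The blur acts as a crude denoiser stand-in; the residual approximates
    the image's noise pattern (hypothetical choice, not Integrity's).
    """
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    # Average the nine shifted windows -> 3x3 box blur of the image.
    blurred = sum(
        padded[i:i + h, j:j + w] for i in range(3) for j in range(3)
    ) / 9.0
    return image - blurred

def authenticity_score(image: np.ndarray) -> float:
    """Hypothetical statistic: variance of the noise residual."""
    return float(noise_residual(image).var())

def classify(image: np.ndarray, low: float, high: float) -> str:
    """Label an image authentic if its score lies inside [low, high].

    The bounds would be fit empirically on known-authentic images, as the
    abstract describes; the values here are placeholders.
    """
    score = authenticity_score(image)
    return "authentic" if low <= score <= high else "ai-generated"
```

The sketch captures why such a scheme is cheap relative to ML detectors: scoring is a few array passes with no trained model, and the same per-region score could in principle be computed on small windows to localize manipulated areas.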