Administrative Trends in Korea
AI Deepfake Analysis Model: Advancing Scientific Investigations in Korea
The Ministry of the Interior and Safety (Minister Yun Hojung), in collaboration with the National Forensic Service, has successfully developed and verified an AI-based Deepfake Analysis Model designed to determine the authenticity of suspected deepfake images, videos, and audio. The development and testing were completed by April, and the model has already been applied in criminal investigations over the past two months.
Between May and June this year, the model was used by frontline investigative agencies such as the Korean National Police Agency, successfully analyzing 60 pieces of evidence across 15 deepfake-related cases. Of these, 13 involved deepfake materials related to candidates in the 21st presidential election, and two concerned digital sex crimes.

During the presidential election, the model was shared with the National Election Commission, where it detected and helped remove illegal deepfake campaign materials from online platforms such as YouTube.
This development marks a milestone by officially enabling deepfake forensic analysis in Korea – a task that was previously impossible due to technical limitations. It also establishes an investigative system grounded in scientific evidence.
The urgency of this advancement is highlighted by the rapid surge in deepfake-related crimes. Requests for corrective action on deepfake sexual exploitation materials rose sharply from 3,574 cases in 2022 to 7,187 in 2023, and further to 23,107 in 2024. Until now, investigative agencies faced significant challenges in analyzing such evidence due to the lack of reliable detection technologies. The new AI model was developed precisely to overcome these limitations.
The new AI model automatically detects traces of manipulation in suspected files and can estimate synthesis probability as well as time-based alteration rates, enabling investigators to quickly determine whether materials are deepfakes.
It has been designed to perform effectively even in real-world investigative environments, including in cases where evidence has degraded in quality due to repeated uploads or downloads. The model is capable of detecting manipulation in specific facial features, such as the eyes, nose, and mouth, and retains strong analytical capacity even when data loss or sound quality issues are present.
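The article does not disclose how the model computes its two headline figures. Purely as an illustrative sketch, the outputs described above – an overall synthesis probability and a time-based alteration rate – could be derived from per-frame detection scores roughly as follows. All function names, scores, and the threshold here are hypothetical assumptions, not details of the actual National Forensic Service model.

```python
# Hypothetical sketch: turning per-frame synthesis scores into the two
# summary figures the article describes. Not the actual NFS model.

def summarize_detection(frame_scores, threshold=0.5):
    """Given per-frame synthesis scores in [0, 1], return an overall
    synthesis probability and a time-based alteration rate (the
    fraction of frames whose score exceeds the threshold)."""
    if not frame_scores:
        raise ValueError("no frames to analyze")
    overall = max(frame_scores)  # the most suspicious frame drives the verdict
    altered = sum(1 for s in frame_scores if s > threshold)
    alteration_rate = altered / len(frame_scores)
    return overall, alteration_rate

# Example: a 10-frame clip where frames 4-6 appear synthesized.
scores = [0.1, 0.2, 0.1, 0.9, 0.95, 0.85, 0.2, 0.1, 0.15, 0.1]
prob, rate = summarize_detection(scores)
```

In this sketch, the clip would be reported with a high synthesis probability while the alteration rate indicates that only part of the timeline was manipulated, which is consistent with the article's claim that investigators can see both whether and when material was altered.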
The Ministry of the Interior and Safety and the National Forensic Service plan to link this model with the AI Voice Phishing Analysis Model, developed in 2023, to maximize investigative synergy. Together, these tools are expected not only to identify whether an item is a deepfake but also to determine whether it was created by mimicking or synthesizing the voice of a particular politician or individual.
Moving forward, the ministry and the National Forensic Service intend to expand the application of the deepfake analysis model to a wider range of institutions. Agencies such as the Ministry of Gender Equality and Family and the Korea Communications Commission, which also face challenges related to deepfake materials, will gradually be provided access to the model. This expansion aims to strengthen national capabilities for detecting and responding to deepfake crimes across multiple sectors.
· Source: Ministry of the Interior and Safety