AI-Based Deepfake Verification Protocols for Legal Evidence: A Forensic and Explainable AI Framework
- DOI
- 10.2991/978-94-6239-610-4_30
- Keywords
- Algorithmic explainability; Chain-of-custody; Deepfake detection; Evidence authenticity; Forensic hashing; Multi-method detection; Provenance metadata; Signed provenance
- Abstract
The growing sophistication of artificial intelligence has made it possible to create highly realistic synthetic media, known as deepfakes, which pose significant challenges to the admissibility and reliability of digital evidence. Deepfakes involving audio, video, and image content are increasingly encountered in civil and criminal cases and require robust, standardized forensic verification systems.
Digital forensic practice, however, frequently lacks protocols that are AI-responsive, reproducible, and legally interpretable enough to address such synthetic manipulations. This paper proposes a comprehensive AI-based deepfake verification protocol aimed at forensic and judicial practice. The framework combines provenance metadata analysis, forensic hashing, multi-modal and multi-method detection, explainable artificial intelligence (XAI) components, and validation on benchmark datasets. Particular attention is paid to aligning technical detection procedures with legal standards, namely transparency, chain-of-custody preservation, repeatability, and expert reporting requirements.
The proposed protocol adopts a defence-in-depth strategy in which standard forensic analysis is applied alongside several independent AI detectors to improve reliability and the communication of uncertainty and limitations. By bridging the gap between current deepfake detection tools and traditional legal and forensic standards, this research provides a practical and legally defensible roadmap that forensic laboratories, investigators, and courts can use to evaluate the authenticity of AI-generated media and maintain trust in digital evidence.
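To illustrate two of the components named above, the following is a minimal Python sketch (not the authors' implementation) of forensic hashing for chain-of-custody anchoring and a simple fusion of several independent detector scores that also reports their disagreement as an uncertainty signal. The detector names and scores are hypothetical, used only for illustration.

```python
import hashlib
from statistics import mean, pstdev


def evidence_hash(data: bytes) -> str:
    """SHA-256 digest recorded at evidence intake to anchor chain-of-custody."""
    return hashlib.sha256(data).hexdigest()


def fuse_detector_scores(scores: dict) -> dict:
    """Combine independent detector outputs (0 = authentic, 1 = synthetic).

    High spread between detectors is surfaced explicitly so the report can
    communicate uncertainty rather than a single opaque verdict.
    """
    values = list(scores.values())
    return {
        "mean_score": mean(values),
        "disagreement": pstdev(values),  # large spread -> flag for expert review
        "per_detector": dict(scores),    # preserved for explainable reporting
    }


# Hypothetical evidence item and detector outputs, for illustration only.
media = b"...raw bytes of the submitted video file..."
record = {
    "sha256": evidence_hash(media),
    "assessment": fuse_detector_scores({
        "visual_artifact_detector": 0.91,
        "audio_lip_sync_detector": 0.78,
        "metadata_consistency_check": 0.85,
    }),
}
```

In practice the hash would be computed before any analysis and re-verified at each transfer, and the per-detector breakdown (rather than only the fused score) would accompany the expert report, in line with the transparency and repeatability requirements discussed in the paper.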
- Copyright
- © 2026 The Author(s)
- Open Access
- This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Cite this article
TY  - CONF
AU  - Rishika Paseband
AU  - Dayana Sebastian
AU  - Jugal Narule
PY  - 2026
DA  - 2026/05/05
TI  - AI-Based Deepfake Verification Protocols for Legal Evidence: A Forensic and Explainable AI Framework
BT  - Proceedings of the First International Conference on Advances in Forensics and Cyber Technologies (ICFACT 2025)
PB  - Atlantis Press
SP  - 345
EP  - 351
SN  - 2352-538X
UR  - https://doi.org/10.2991/978-94-6239-610-4_30
DO  - 10.2991/978-94-6239-610-4_30
ID  - Paseband2026
ER  -