Fordham Law Professor Daniel J. Capra, the reporter to the Judicial Conference Advisory Committee on Evidence Rules, tells Bloomberg Law that he is working on two proposals addressing “deepfakes” and the reliability of material generated by AI tools.
The Advisory Committee on Evidence Rules voted Friday to continue developing proposals that address “deepfakes” and the reliability of material generated by AI tools.
US District Judge Jesse Furman, the committee’s chairman, suggested that the panel could draft a rule on deepfakes, AI-generated videos or images that can convincingly impersonate real people, but set it aside for now because the issue has not yet become a prominent one in courts.
“We ought to have it on our radar at a minimum,” Furman said.
Daniel Capra, a Fordham University School of Law professor who serves as the committee’s reporter, said that if the panel finds that deepfakes are becoming an issue, “then there’d be something that we could go back to, to start with, instead of starting from ground zero.”
The committee declined to take up other proposals from outside academics on how the Federal Rules of Evidence could be amended to address deepfakes. Instead, Capra said he will continue working on his own proposal, which would address how courts should handle a party’s claim that computer-generated or other electronic evidence has been altered or fabricated by AI.
The other proposal, which the committee plans to act on more quickly, would deal with expert-witness testimony about machine-generated evidence. A draft by Capra would subject such evidence to the same requirements that a human expert witness must meet.
Capra said he will work on both proposed rule changes ahead of the committee’s next meeting, currently scheduled for May.
Read “AI Evidence Rule Proposals Move Ahead at Federal Courts Panel” in Bloomberg Law.