AID: Attesting the Integrity of Deep Neural Networks
Time: Tuesday, December 7th, 11:38am - 12:00pm PST
Event Type: Research Manuscript
AI/ML System Design
Description: Due to their crucial role in many decision-making tasks, Deep Neural Networks (DNNs) are common targets for a wide array of integrity breaches. In this paper, we propose AID, a novel methodology for validating the integrity of DNNs. AID generates a set of test cases, called edge-points, that can reveal whether a model has been compromised using only access to its top-1 prediction output.
Experimental results show that AID is highly effective and reliable. With at most four edge-points, AID detects eight representative integrity breaches, including backdoor, poisoning, and compression attacks, with zero false positives.
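The attestation idea described above can be illustrated with a minimal sketch: probe the deployed model with a few inputs whose genuine top-1 predictions were recorded in advance, and flag any mismatch. All names here (`attest`, `edge_points`, the toy threshold classifiers) are illustrative assumptions, not the paper's actual method or code.

```python
# Hypothetical sketch of top-1-only integrity attestation.
# Assumption: probe inputs and their genuine top-1 labels were recorded
# when the trusted model was deployed.

def attest(model_top1, edge_points, reference_labels):
    """Return True iff the model's top-1 prediction on every edge-point
    matches the label recorded for the genuine model."""
    return all(model_top1(x) == y for x, y in zip(edge_points, reference_labels))

# Toy demo: the "model" is a 1-D threshold classifier; a tampered copy
# shifts the decision boundary and is caught by a near-boundary probe.
genuine = lambda x: int(x > 0.5)
tampered = lambda x: int(x > 0.9)

edge_points = [0.49, 0.51, 0.89, 0.91]            # probes near the boundary
reference = [genuine(x) for x in edge_points]     # recorded top-1 outputs

print(attest(genuine, edge_points, reference))    # True
print(attest(tampered, edge_points, reference))   # False
```

Probes placed near the decision boundary are the most sensitive to tampering, since small changes to the model are most likely to flip predictions there.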