After developing a tool for testing the security of its own AI systems and assessing them for vulnerabilities, Microsoft has decided to open-source it to help organizations verify that the algorithms they use are “robust, reliable, and trustworthy.”
Counterfit started as a collection of attack scripts written to target individual AI models, but Microsoft turned it into an automation tool to attack multiple AI systems at scale.
“Today, we routinely use Counterfit as part of our AI red team operations. We have found it helpful to automate techniques in MITRE’s Adversarial ML Threat Matrix and replay them against Microsoft’s own production AI services to proactively scan for AI-specific vulnerabilities. Counterfit is also being piloted in the AI development phase to catch vulnerabilities in AI systems be…”