The Pentagon’s research arm is looking for cutting-edge techniques to disrupt, fool or undermine the systems that help computers “see”—to ultimately fuel future improvements.
According to a recently unveiled Artificial Intelligence Exploration Opportunity, the Defense Advanced Research Projects Agency wants proposals for innovative technical research concepts to throw off neural network-based machine vision technology without any insight into how the systems were trained or built.
“Development and exploration of universal disruption techniques, including the scientific phenomena that enables their success, will enhance our understanding of the inherent nature of neural net architectures and inform more robust approaches,” officials wrote in their announcement.
Loosely based on the biological neural networks that make up human and animal brains, deep neural nets can be trained to perform a range of classification and prediction tasks, and can “learn” and adapt along the way. The agency notes that Convolutional Neural Nets, or CNNs, are what initially boosted the utility of computer recognition, and over roughly the last decade, artificial intelligence-infused machine vision “has improved and progressed, achieving superhuman performance with real-time executable codes that can detect, classify and segment within a complicated image.” The CNN paradigm involves a multi-layer network of computational nodes, trained on massive amounts of labeled images, that produces highly accurate object detection and scene classification capabilities.
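The core operation the agency's announcement alludes to, convolution, can be illustrated with a minimal sketch. The example below (an assumption for illustration, not DARPA's code) slides a hand-built vertical-edge kernel over a toy image; in a trained CNN, many such kernels are learned from labeled data rather than written by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over an image (valid padding) -- the basic CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the kernel's weighted sum over one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "image": left side dark (0), right side bright (1).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A vertical-edge kernel: responds strongly where brightness changes left to right.
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

response = conv2d(image, kernel)
print(response)  # strong responses along the dark-to-bright boundary
```

Stacking many layers of learned kernels, interleaved with nonlinearities and pooling, is what lets CNNs build up from edges to textures to whole objects.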
Although deep neural net architectures have accelerated progress in machine vision applications over recent years, DARPA’s program manager for the project, Gregory Avicola, told Nextgov Tuesday that a large body of work now exists and is evolving “in the art of deceiving machine vision systems with techniques that have no impact on a human observer …