Intel, the world’s largest manufacturer of microprocessors, has been chosen to lead a new initiative run by DARPA, the U.S. military’s research wing, aimed at improving cyber-defenses against deception attacks on machine learning models.
Today, Protocol can report that DARPA has selected 17 organizations to work on the GARD project, including Johns Hopkins University, Intel, Georgia Tech, MIT, and Carnegie Mellon University. Intel will lead one part of the project with Georgia Tech, focusing on defending against physical adversarial attacks.
The initiative’s main goal is to improve security against deception attacks on machine learning models. Machine learning is a branch of artificial intelligence in which systems are designed to improve as they take in new information, experience, and data. But machine learning algorithms can be fooled by deception attacks, which, although rare, can meddle with how those algorithms behave. A subtle change to a real-world object can have disastrous consequences, particularly for self-driving vehicles: a few weeks ago, researchers tricked a Tesla into accelerating 50 miles per hour above the intended speed by adding a two-inch piece of tape to a speed limit sign.
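To see how a tiny, deliberate perturbation can flip a model’s decision, here is a minimal sketch of the idea behind such attacks, using a toy linear classifier and a fast-gradient-sign-style nudge. This is purely illustrative (the model, weights, and `fgsm_perturb` helper are invented for this example, and real attacks target far more complex models, not this one); it is not code from GARD or Intel.

```python
import numpy as np

def predict(w, b, x):
    """Probability that input x belongs to class 1 (sigmoid of a linear score)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(w, x, epsilon):
    """Fast-gradient-sign-style step: nudge every input feature by +/- epsilon
    in whichever direction pushes the class-1 score upward."""
    return x + epsilon * np.sign(w)

# A toy "model" and an input it confidently labels class 0.
w = np.array([2.0, -3.0, 1.0])
b = 0.0
x = np.array([-1.0, 1.0, -1.0])

clean_p = predict(w, b, x)               # low class-1 probability on the clean input
x_adv = fgsm_perturb(w, x, epsilon=1.5)  # small per-feature change
adv_p = predict(w, b, x_adv)             # the same model now predicts class 1

print(clean_p < 0.5, adv_p > 0.5)        # True True
```

The point of the sketch is that the attacker never touches the model itself, only the input, which is why a piece of tape on a speed limit sign is enough in the physical world.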
That is where DARPA hopes to come into play. The research arm said earlier this year that it would be working on a program known as GARD, or Guaranteeing AI Robustness against Deception. Intel said today it will serve as the prime contractor for the four-year program alongside Georgia Tech. Jason Martin, the principal engineer at Intel Labs who will lead the Intel GARD team, said the chipmaker and Georgia Tech will work together to enhance object detection and improve the ability of AI and machine learning systems to respond to adversarial attacks.