Last year, Dr. Hyrum Anderson, now the Principal Architect (Security Machine Learning) at Microsoft, and I designed a competition that turned out to be a huge success. It launched at the AI Village at DEF CON 27 in August 2019, where we invited contestants to participate in a white-box attack against static malware Machine Learning (ML) models.
Now that ML is even more widespread in detecting cyberthreats, the Machine Learning Security Evasion Competition is back with an improved game and more partners joining.
This year, Microsoft is sponsoring the event as part of their investments in the Trustworthy Machine Learning Initiative, and CUJO AI’s Vulnerability Research Lab is developing the framework to host the competition, enabling both the defensive and the offensive sides of the game.
We’re really excited to create this opportunity for AI companies, researchers, ML practitioners and cybersecurity professionals to come together and exercise their defender–attacker muscles in a unique real-world setting.
The Twofold Challenge
Last year, the goal of the competition was to get 50 malicious Windows Portable Executable (PE) files to evade detection by three machine learning malware classifiers. Not only did the files need to evade detection, but they also had to maintain their exact original functionality and behavior.
The 2020 Machine Learning Security Evasion Competition (MLSEC) is similarly designed to experiment with the variety of ways ML systems may be evaded by malware, in order to better defend against these techniques.
The Defender Challenge will run June 15 through July 23. Participants will develop novel defenses that attackers will later attempt to evade. Submitted defenses must pass real-world tests: they must detect real-world malware at moderate false-positive rates.
The Attacker Challenge will run August 6 through September 18 under a black-box threat model: contestants get API access to hosted antimalware models, including those successfully developed in the Defender Challenge.
Contestants may discover how to evade these models using only “hard-label” query results, i.e., the final benign-or-malicious verdict with no confidence scores attached. Samples from final submissions will be detonated in a sandbox to verify that they are still functional. In addition to evasion rates, the total number of API queries a contestant issues will factor into the final ranking.
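To make the hard-label, query-budgeted setting concrete, here is a minimal sketch in Python. The hosted competition API is simulated by a toy classifier that returns only a 0/1 verdict, and the “attack” simply appends padding bytes until the label flips; the names (`HardLabelModel`, `query`, `evade`) and the detection rule are illustrative assumptions, not the real MLSEC API or models.

```python
from typing import Optional


class HardLabelModel:
    """Toy stand-in for a hosted antimalware model.

    It exposes only a hard label (1 = malicious, 0 = benign) and
    counts queries, since attackers are ranked partly on how few
    queries they need. The detection rule here is purely illustrative:
    flag any sample in which the 0xCC byte makes up over 30% of bytes.
    """

    def __init__(self) -> None:
        self.query_count = 0

    def query(self, sample: bytes) -> int:
        self.query_count += 1
        return 1 if sample.count(0xCC) / len(sample) > 0.3 else 0


def evade(model: HardLabelModel, sample: bytes,
          max_queries: int = 50) -> Optional[bytes]:
    """Naive evasion loop: append benign padding bytes until the
    hard label flips to benign, or give up at the query budget.
    (Real entries must also keep the PE functional in a sandbox.)"""
    candidate = sample
    for _ in range(max_queries):
        if model.query(candidate) == 0:
            return candidate  # label flipped: evasion succeeded
        candidate += b"\x00" * 8  # dilute the suspicious byte ratio
    return None  # query budget exhausted
```

The append-only mutation mirrors a real class of functionality-preserving attacks on static PE classifiers (padding or overlay appends), but a competitive entry would use far smarter query strategies to keep the query count down.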
The Impact of Competition Findings
The fact that five researchers from Brazil and New Zealand have already written an academic white paper based on the competition confirms that we started something good here.
Multiple researchers in the 2019 MLSEC discovered approaches that completely and simultaneously bypassed three ML antimalware models. Dozens of other researchers achieved high scores for their work evading these open source models. Several top contestants published their findings.
– Dr. Hyrum Anderson
We hope to see more companies getting involved with the broader ML security community in the future, as businesses need the scrutiny that outside practitioners bring to point out gaps in their defenses.
The MLSEC project welcomes contributions and suggestions during and after the competition.
Watching the evasions evolve during last year’s competition was exciting for us. We are sure this year will bring even more to the table now that contestants control both the defensive and offensive sides of the game. For me, this will be the most anticipated sporting event of August!
You can find the rules and information about participating on the official 2020 MLSEC website.
The two competition stages to mark on your calendars:
- Defender Challenge: Jun 15 – Jul 23, 2020 (AoE)
- Attacker Challenge: Aug 06 – Sep 18, 2020 (AoE)
Each challenge awards a Grand Prize of $2,500 in Azure credits to the winner and a First Prize of $500 in Azure credits to the runner-up.