There is a prize pool of $8,000. The top three teams on the leaderboard will receive monetary prizes according to the metrics listed below. Final placement on the leaderboard is determined by average performance over two held-out evaluation model sets, each carrying a 50% weight. The first held-out set updates the leaderboard on the fly; the second is used only to determine the winning teams in the final round.
Leaderboard rank is determined by the average poisoned accuracy (PACC) increase; ties are broken by the average attack success rate (ASR) drop; and a submission is rejected if any evaluation model is detected with an accuracy (ACC) drop greater than 20%.
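The ranking rule above can be sketched as follows. This is a minimal illustration, not the organizers' actual evaluation code; the function name, data layout, and per-set metric fields are assumptions made for the example.

```python
# Hypothetical sketch of the leaderboard rule: the metric names and the
# submission data layout are assumptions, not the competition's real code.

def rank_submissions(submissions, acc_drop_limit=0.20):
    """Rank submissions per the stated rules:
    - reject any submission where some evaluation model's ACC drop exceeds 20%,
    - rank by average PACC increase (higher is better),
    - break ties by average ASR drop (larger drop is better).
    Each submission holds per-held-out-set values for the two model sets."""
    # Rejection rule: any single evaluation model over the ACC-drop limit.
    eligible = [
        s for s in submissions
        if max(s["acc_drops"]) <= acc_drop_limit
    ]

    def score(s):
        # Each held-out set carries 50% weight, i.e. a plain mean of the two.
        pacc = 0.5 * (s["pacc_increase"][0] + s["pacc_increase"][1])
        asr = 0.5 * (s["asr_drop"][0] + s["asr_drop"][1])
        return (pacc, asr)  # tuple ordering: ASR drop only breaks PACC ties

    return sorted(eligible, key=score, reverse=True)
```

Returning a `(pacc, asr)` tuple as the sort key makes the tie-break implicit: Python compares the second element only when the first elements are equal.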
The winning team will also be invited to co-author a publication summarizing the competition results and give a short presentation at the Backdoor Attacks and Defense in Machine Learning (BANDS) workshop at ICLR’23 (virtual-event registration provided).
To be eligible for prizes, winning teams must submit their methods, code, and the names and affiliations of their team members. We are unable to grant awards to teams that appear on U.S. terrorist watch lists or are subject to sanctions. Additionally, we cannot send any money to Russia. However, Russian participants are still welcome to enter and are eligible for non-monetary prizes for winning entries (co-authorship and an invitation to present at the workshop). We reserve the right to disqualify any team or individual found to have violated any of the rules or regulations established by the competition’s organizers.