In Jigsaw’s fourth Kaggle competition, we return to the Wikipedia Talk page comments featured in our first Kaggle competition. Asking human judges to look at individual comments, without any context, and decide which are toxic and which are innocuous is rarely an easy task. In addition, each individual may have their own bar for toxicity. We’ve tried to work around this by aggregating decisions with a majority vote, but many researchers have rightly pointed out that this discards meaningful information.
A much easier task is to ask individuals which of two comments they find more toxic. But when both comments are non-toxic, people often select randomly; when one comment is the obvious choice, inter-annotator agreement is much higher.
In this competition, we will ask you to score a set of about fourteen thousand comments. Pairs of comments were presented to expert raters, who marked one of the two comments in each pair as more harmful, each according to their own notion of toxicity. When you provide scores for these comments, they will be compared with several hundred thousand of these pairwise rankings, and your average agreement with the raters will determine your score. In this way, we hope to focus on ranking the severity of comment toxicity from innocuous to outrageous, where the middle matters as much as the extremes.
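To make the metric concrete, here is a minimal sketch of how average agreement could be computed from a set of scores. The file names and column names (`submission.csv`, `validation_pairs.csv`, `comment_id`, `score`, `less_toxic_id`, `more_toxic_id`) are illustrative assumptions for this sketch, not the competition's actual schema.

```python
import pandas as pd

# Hypothetical files and columns, assumed for illustration:
# submission.csv       -> one toxicity score per comment (comment_id, score)
# validation_pairs.csv -> one row per rater judgment, giving the ids of the
#                         comment marked less toxic and the one marked more toxic
scores = pd.read_csv("submission.csv").set_index("comment_id")["score"]
pairs = pd.read_csv("validation_pairs.csv")

less = scores.loc[pairs["less_toxic_id"]].to_numpy()
more = scores.loc[pairs["more_toxic_id"]].to_numpy()

# A pair counts as agreement when the comment the rater marked as more toxic
# received the strictly higher score.
average_agreement = (more > less).mean()
print(f"Average agreement: {average_agreement:.4f}")
```

Note the strict inequality: under this reading, tied scores would count as disagreement, so a model that assigns distinct scores to distinct comments avoids losing pairs to ties.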
Can you build a model whose scores rank each pair of comments the same way our professional raters did?
Disclaimer: The dataset for this competition contains text that may be considered profane, vulgar, or offensive.
Awards:
- 1st Place – $12,000
- 2nd Place – $10,000
- 3rd Place – $8,000
- 4th Place – $5,000
- 5th Place – $5,000
- 6th Place – $5,000
- 7th Place – $5,000
Deadline: 31-01-2022