Cornell University: AI finds whites significantly less likely to be hateful than blacks on social media

Cornell University conducted an experiment using AI to flag social media posts that are hateful, racist, sexist or meant to harass another user.

Across five different datasets, black users of social media were consistently found to engage in these behaviors at “significantly higher” rates than whites. The datasets are the training data the AI used to determine which social media posts to flag.

So what is their conclusion? The “systematic racial bias” of all the people who made the datasets.

In other words, the experiment was never conducted in good faith. The sole purpose was to demonize white people no matter what results they got.

The experiment was conducted by a male researcher from Germany, a female researcher from India, and a white male graduate student.

The five different datasets used were created by people of different racial and ethnic backgrounds. The list of names is heavily Asian. Two of the datasets were largely created by an Arab researcher in the UK. Another comes from Israeli researchers. One was spearheaded by a female researcher in Greece.

Yet, all around the world, the entire computer science research industry is just teeming with systematic anti-black bias.

Here is an easy social media experiment. I typed “white people” in Twitter and got pages and pages of tweets mocking, insulting, and demonizing white people. Then I typed “black people.” What I got was basically the same thing. Pages and pages mocking, insulting, and demonizing white people.
