With increasing automation and the continuous development of machine learning, modern
algorithms are now used in almost all areas to improve and simplify workflows. Recommendation
Systems (RS) are one such class of algorithms: they automatically suggest items
based on a user's interests. In this work, we focus on investigating biases
in recommendation systems for news. News Recommendation Systems (NRS)
provide a way to suggest news targeted to users' needs. Since news is a
primary source of information, it is imperative that it is presented fairly and free from bias.
Thus, in addition to providing good recommendations, an NRS should also ensure
novelty and diversity. For this purpose, several experiments are conducted with the MIND
dataset, which contains the behavior data of 1,000,000 users collected from MSN. This work gives an overview
of the different biases in the feedback loop of news recommendation systems. In particular,
data and model biases are examined and related to user biases. The research
is intended to provide a template for bias modeling. All experiments are available in a GitHub repository:
https://github.com/LaKin314/Investigating_biases_in_News_Rec.