Author: Process Fellows
Examples of self-reinforcing predictions:
- Personalized advertising: Because a user clicked on a product from a certain category once, the algorithm shows them more products from that category; as a result, other potentially relevant products are no longer suggested.
- Crime prediction: A crime-prediction model prioritizes areas that already have a high police presence. The additional officers in those areas report more crimes, which leads the model to conclude that even more patrols are needed there.
- Social media & filter bubbles: Recommendation algorithms preferentially surface content that users already engage with, so users end up seeing only a narrow, biased range of perspectives.
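The self-reinforcing dynamic in these examples can be sketched with a toy simulation (all names and numbers here are illustrative, not from the source): a recommender with three equally appealing categories that always shows the most-clicked one locks onto a single category, because only the shown category can ever collect new clicks.

```python
import random

def simulate(rounds=500, epsilon=0.0, seed=0):
    """Toy recommender: each round, show one of three categories and
    record whether the user clicks. All categories are equally appealing
    (click probability 0.5), but a purely greedy policy (epsilon=0.0)
    only ever shows the current click leader, so early luck is reinforced.
    Returns how often each category was shown."""
    rng = random.Random(seed)
    clicks = [1, 1, 1]   # uniform prior: no category starts ahead
    shown = [0, 0, 0]
    for _ in range(rounds):
        if rng.random() < epsilon:
            cat = rng.randrange(3)           # random exploration
        else:
            cat = clicks.index(max(clicks))  # greedy: most-clicked category
        shown[cat] += 1
        if rng.random() < 0.5:               # every category is equally liked
            clicks[cat] += 1
    return shown
```

With `epsilon=0.0` the first category monopolizes all impressions; a small exploration rate (e.g. `epsilon=0.3`) gives every category some exposure, previewing the "random exploration" mitigation below.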
Solutions to mitigate feedback loop bias:
- Random exploration: Occasionally recommend items or make decisions at random so the model collects feedback outside its own past choices.
- External data sources: Train not only on the model's own past decisions but also on independent data sources.
- Fairness checks: Measure bias regularly to detect whether certain groups or kinds of information are systematically disadvantaged.
- Model refresh: Retrain regularly on new, more diverse data to reduce accumulated bias.
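The first mitigation, random exploration, is commonly implemented as an epsilon-greedy policy. A minimal sketch (the function name and click-count representation are assumptions for illustration):

```python
import random

def recommend(click_counts, epsilon=0.1, rng=random):
    """Epsilon-greedy recommendation: with probability epsilon, pick a
    random category (exploration) so neglected options still generate
    fresh training data; otherwise pick the most-clicked category
    (exploitation)."""
    if rng.random() < epsilon:
        return rng.randrange(len(click_counts))  # explore
    # exploit: index of the category with the most recorded clicks
    return max(range(len(click_counts)), key=click_counts.__getitem__)
```

For example, `recommend([10, 2, 1])` returns category 0 most of the time but still occasionally surfaces categories 1 and 2, which keeps the feedback data from collapsing onto a single option.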