From credit ratings to housing allocation, machine learning models are increasingly used to automate 'everyday' decision-making processes. As their impact on society grows, more and more concerns are being voiced about the loss of transparency, accountability, and fairness in the algorithms making these decisions. We as data scientists need to step up our game and look for ways to mitigate emergent discrimination in our models. We need to make sure that our predictions do not disproportionately hurt people with certain sensitive characteristics (e.g., gender, ethnicity).
This is a companion discussion topic for the original entry at https://godatadriven.com/blog/towards-fairness-in-ml-with-adversarial-networks/