Towards fairness in ML with adversarial networks - GoDataDriven

From credit ratings to housing allocation, machine learning models are increasingly used to automate 'everyday' decision making processes. With the growing impact on society, more and more concerns are being voiced about the loss of transparency, accountability and fairness of the algorithms making the decisions. We as data scientists need to step up our game and look for ways to mitigate emergent discrimination in our models. We need to make sure that our predictions do not disproportionately hurt people with certain sensitive characteristics (e.g., gender, ethnicity).

This is a companion discussion topic for the original entry at

Hi: Great blog post, and I was really happy to be introduced to this topic through it. This may or may not help the cause further, but I wrote up a few notes on the same topic and coded it up in R. My GitHub repository is here:

Perhaps it might be an idea for me to open a PR against your GitHub repository. Once again, many thanks for the above.