I was inspired by TensorFlow staff who showed great support for Responsible AI. I emailed TensorFlow about Responsible AI after the online TensorFlow Summit a few months ago, and it was great to get responses from two TensorFlow employees, Miguel and Peter. They gave good advice on Responsible AI, and I connected with them today to let them know I was submitting to this hackathon. These professionals inspired me to keep working with TensorFlow to create Responsible AI applications - thanks again to them and TensorFlow!

The Reproducibility Tool promotes better ML design of apps and experiments, which gives a better foundation from which to make choices about Responsible AI. Bias, lack of Fairness, and lack of Transparency can all be caused by poor ML design, so ensuring good ML design through Reproducibility improves Responsible AI.

I built the Reproducibility Tool based on the NeurIPS 2019 Reproducibility Challenge Checklist and the TensorFlow What-If Tool. I coded it using TensorFlow 2.2 in Colab to insert Reproducibility code and text into the TensorFlow 'Beginner' ML example application. link

This is an example of an ML app with RAITE code inserted into it according to the Reproducibility Checklist. Reproducibility code improves the Responsible AI of the ML app by ensuring that others can run the same code and get the same results. Importantly, this means that any Responsible AI changes will be solving Responsible AI problems, not ML design problems. A screenshot of the Reproducibility Tool is uploaded below. The Reproducibility Tool is the first tool in the Responsible AI Testing Environment (RAITE). (RAITE is also a play on words, referring to the 'Rate of Classification' or the 'Rate of Mis-classification' that is important when evaluating ML classification tasks.)
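To give a feel for the kind of reproducibility code the tool inserts, here is a minimal sketch of a preamble cell covering seed fixing and version logging, two of the usual checklist items. The cell layout and the seed value are placeholders of my own, not the exact cells of my notebook:

```python
import random
import numpy as np
import tensorflow as tf

# --- Reproducibility preamble (sketch) ---
# Record library versions so others can recreate the environment.
print("TensorFlow version:", tf.__version__)
print("NumPy version:", np.__version__)

# Fix the random seeds so repeated runs give the same results.
SEED = 42  # placeholder value
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
```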

The main challenge I face is that the power has been cut in my building and I have run out of battery power in my PC, so I am submitting this before the deadline using my tablet! Regarding the submission, I wanted to create an interactive app that improves the design of ML experiments, which would in turn improve Responsible AI. The challenge is that Design of Experiments can be complicated: it requires careful consideration of the tasks being examined, the model being tested, the architecture of the ML system, and many other factors. (Handling all of that is my long-term goal, and it will take more work before the RAITE app is complete.) I tackled this complexity by following the lead of NeurIPS, who designed a Reproducibility Checklist to improve the quality of ML submissions. If developers can make their code and results reproducible, then there is a much better chance that problems of Bias, Unfairness, and lack of Interpretability will not be present, which means Responsible AI has been improved. So the challenge was addressed by integrating the Reproducibility Checklist into an interactive app that lets the user work with their code and the checklist at the same time; a sketch of how that can look is shown below. The design of this app is based on the What-If Tool. A screenshot of the app is uploaded below.
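One simple way to let the user interact with their code and the checklist in the same Colab page is to express checklist items as form-field cells. This is only a sketch under my own assumptions about how such a cell could look; the item wording paraphrases the NeurIPS 2019 Reproducibility Checklist and the variable names are hypothetical:

```python
# Checklist items expressed as Colab form fields (sketch).
model_description_provided = "Yes"   #@param ["Yes", "No", "N/A"]
seeds_and_versions_reported = "Yes"  #@param ["Yes", "No", "N/A"]
data_splits_described = "No"         #@param ["Yes", "No", "N/A"]

checklist = {
    "Clear description of the model and algorithm": model_description_provided,
    "Random seeds and library versions reported": seeds_and_versions_reported,
    "Train/validation/test splits described": data_splits_described,
}

# Summarise which items still need attention before the experiment is shareable.
for item, answer in checklist.items():
    print(f"[{answer}] {item}")
```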

I learned a lot about the 'Beginner' example code that runs the MNIST data set through a classifier, because I inserted Responsible AI code and text into this Beginner app as a prototype using RAITE and the Reproducibility Tool. The code executes in Colab, within my hackathon submission app, the Reproducibility Tool. Please see the working code at this link: link. The Colab notebook is called RAITE TensorFlow Boulay.ipynb.
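For reference, the model that the notebook wraps is roughly the classifier from the TensorFlow 'Beginner' quickstart. This is a sketch from memory rather than the exact cells of my notebook; the reproducibility preamble shown earlier would sit in a cell before it:

```python
import tensorflow as tf

# Load and normalise the MNIST data set.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Small fully connected classifier, as in the 'Beginner' example.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test, verbose=2)
```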

What's next for the Responsible AI Testing Environment (RAITE)? Now I can continue to develop methods that improve the experimental design of ML applications, with the goal of improving Responsible AI. I will offer the auditing and improvement of ML experiments and industrial applications as a consulting service to companies that need to comply with new Responsible AI policy, audits, and legislation as required in the UK, EU, Canada, and the US. The RAITE Reproducibility Tool for Responsible AI will be used to improve and promote Responsible AI for myself and for clients who want to bring their ML applications up to the level of the NeurIPS guidelines and TensorFlow's high standards for Responsible AI. (...Of course, I would like to be asked to pitch this app to TensorFlow - that is what I really want to be next for RAITE!...)

Thanks, Alain Joseph Boulay

Built With

  • colab
  • keras
  • tensorflow 2.2