Tri Nguyen
Machine Learning Engineer
May 2, 2020 • 3 min read
As Artificial Intelligence (AI) becomes more and more integral to our daily lives, AI fairness has become one of the most important subjects in the field, if not the most important. The topic has gained traction fast in the past few years due to the increasing number of incidents of AI systems making unfair decisions, both in the public sector and in the private sector. It has become evident that commercial-grade AI solutions must address the fairness challenge urgently.
In order to design a fair system, we must give the system a definition of what is fair. Unfortunately, fairness is not clear-cut, even for humans. There is no universal definition of "fair", as it depends on the context of the situation.
For instance, a model that rates your job application based on your gender can be considered unfair, while a model that recommends your diet should take your gender into account (if you allow it).
Although there are several metrics to quantify fairness, we believe that analyzing these metrics alone is not enough. The AI system should also be transparent, so that users can understand whether the system makes fair decisions.
Aito cares about fairness, and we would like to enable developers to create fair systems using a two-fold approach: detecting bias in the data before building, and making each decision transparent and adjustable afterwards.
Before building an AI system, we should try to detect whether there is any bias in the training data. A good example of biased data is presented in this paper by Joy Buolamwini and Timnit Gebru, which evaluates commercial gender classification systems based on facial images. They discovered that darker-skinned females had the highest error rate (34.7%), while lighter-skinned males had the lowest (0.8%). They also analyzed the benchmark datasets and observed that the darker-skinned female group is the least represented in the data and often has low image quality.
We believe that examining the data is a cost-efficient approach, as it avoids having to readjust or even reimplement the system after it has been built, should it turn out to be unfair.
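As a simple illustration, a first pass over the training data can be done in a few lines of Python. This sketch checks group representation and per-group label rates; the file and column names are hypothetical, not taken from the study above:

```python
import pandas as pd

# Hypothetical training data for a hiring model; the file and the
# columns (gender, skin_type, hired) are assumptions for illustration.
df = pd.read_csv("applications.csv")

# How well is each demographic group represented in the data?
representation = df.groupby(["skin_type", "gender"]).size() / len(df)
print(representation)

# Does the label distribution differ across groups? Large gaps here
# can signal historical bias that a model would learn to reproduce.
positive_rate = df.groupby(["skin_type", "gender"])["hired"].mean()
print(positive_rate.sort_values())
```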
Aito provides the Relate API, which can be used to detect bias. This notebook presents a detailed example of how you can quickly use Aito to detect bias not only in the data but also in the model's predictions.
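To give an idea of what this looks like, here is a sketch of a relate query sent over HTTP. The instance URL, API key, and the table and field names are assumptions for illustration; the linked notebook and Aito's documentation show the exact schema:

```python
import requests

# Hypothetical table of job applications: does the rejection decision
# relate to the applicant's gender more than chance would suggest?
query = {
    "from": "applications",
    "where": {"decision": "rejected"},
    "relate": ["gender"],
}

response = requests.post(
    "https://your-instance.aito.app/api/v1/_relate",  # assumed instance URL
    json=query,
    headers={"x-api-key": "your-read-only-api-key"},
)
# The response describes how strongly the condition and the related
# feature co-occur; a strong dependency here can indicate bias.
print(response.json())
```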
As mentioned above, one challenge of building a fair AI system is that fairness depends on the context and on the perspective of the user. Hence, we believe that a fair AI system should be transparent and be able to explain each decision to let the end-user decide whether the decision is fair.
This poses a second challenge: Can the system be agile enough so that the user can make ad-hoc adjustments to achieve a fairer result? Would the system be able to quickly tell the user how much its performance is impacted by the adjustment?
We developed Aito with these goals in mind, addressing them through its reasoning layer and its query language.
Aito's reasoning layer can reveal how a feature impacts the returned result. For example, on this German credit rating dataset, Aito can explain its prediction of whether a customer has a good or bad credit rating based on the customer's information.
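As a sketch, a predict query can ask Aito to return its reasoning alongside the prediction. The field names below loosely follow the German credit dataset, and the "$why" selector for explanations is an assumption to verify against Aito's documentation:

```python
import requests

# Predict a customer's credit rating and ask for an explanation.
# Table and field names are assumptions loosely based on the German
# credit dataset; "$why" is assumed to request Aito's reasoning.
query = {
    "from": "credit",
    "where": {"age": 35, "housing": "own", "job": "skilled"},
    "predict": "risk",
    "select": ["$p", "feature", "$why"],
}

response = requests.post(
    "https://your-instance.aito.app/api/v1/_predict",  # assumed instance URL
    json=query,
    headers={"x-api-key": "your-read-only-api-key"},
)
# Each hit should contain the predicted rating, its probability, and
# a breakdown of how the known features contributed to it.
print(response.json())
```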
Aito's agility comes from its query language. The query language can be loosely described as a way of providing Aito with the known information to predict the "unknown".
For instance, you provide a customer's information, such as age, gender, credit history, and job, and Aito predicts the customer's credit rating. By taking advantage of the query language, the user can make ad-hoc adjustments, such as withholding or providing more information, to achieve a fairer result. Through the Evaluate endpoint, the system can instantly report how much the adjustments impact performance and whether the system requires human assistance. This notebook demonstrates this process on the German credit rating use case.
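The sketch below illustrates the idea: evaluate the same prediction task with and without a sensitive attribute and compare the accuracy. The evaluate query shape, the "$get" references, and the response fields are assumptions based on Aito's query conventions; the linked notebook shows the exact syntax:

```python
import requests

def evaluate(known_fields):
    """Assumed helper: measure prediction accuracy when only the
    given fields are provided to Aito as known information."""
    query = {
        # Hold out every 4th row as the test set.
        "test": {"$index": {"$mod": [4, 0]}},
        "evaluate": {
            "from": "credit",
            # Fill the query from each test row's known fields.
            "where": {field: {"$get": field} for field in known_fields},
            "predict": "risk",
        },
    }
    response = requests.post(
        "https://your-instance.aito.app/api/v1/_evaluate",  # assumed URL
        json=query,
        headers={"x-api-key": "your-read-only-api-key"},
    )
    return response.json()

# Compare performance with and without the sensitive attribute.
with_gender = evaluate(["age", "job", "housing", "gender"])
without_gender = evaluate(["age", "job", "housing"])
print(with_gender.get("accuracy"), without_gender.get("accuracy"))
```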
As a company, we value transparency and agility, and it is time we applied these principles to the development of AI systems. It is our duty to work towards systems that are fair and inclusive for all.