After a long time, people have started returning to normal life. They are visiting markets, workplaces, stations, and other public places with minor changes to their lifestyle. The government has defined rules that the public must follow when visiting public sites, so that the situation stays under control without disrupting daily life. However, many people do not follow these rules. A single person's irresponsible behavior can affect the whole society, and it is challenging for the government to monitor each person manually.
Artificial Intelligence (AI) can help governments, organizations, and industries monitor public places and track people who are not following the rules. The system generates an alert so that the authorities can take action accordingly and reduce the spread of the virus. This makes it easy for the police to monitor compliance 24/7.
For example, AI can examine whether a person is wearing a mask and maintaining the required social distance as per the guidelines. When the system recognizes someone who is not following the rules, it can alert the authorities, helping to keep the risk of spreading the virus fairly small.
Akira AI offers a solution that tracks and analyzes whether people are following the rules. It provides answers to questions such as:
Akira AI provides a complete solution to the end customer through its extensive Explainable AI features. The "COVID-19 Tracker" system obeys all the principles of Explainable AI and provides a great customer experience. It answers user questions through an interpretable system that is transparent and easy to understand. Akira AI uses a step-by-step approach to provide interpretability, so customers can easily understand how the system generates each output. By bringing transparency to black-box models, Akira AI helps to gain user trust and confidence, combining performance with explainability. For the "COVID-19 Tracker" system, Akira AI uses an opaque model to achieve better performance and accuracy than a transparent model would, while still providing explanations to resolve all user queries.
It is human tendency that whenever we use a machine, we ask how it works; only then can we believe in it. Numerous questions come to a customer's mind when using an AI system, but it is difficult for the system to answer them when the model is opaque. Some of these questions are:
"How to take your business back to better, not just back to normal" - EY (Ernst & Young)
Explainable AI helps answer all of the customer's questions and provides a transparent, interpretable system. Akira AI's Explainable AI uses various frameworks and methodologies to answer those questions. The methods that can be used are given in the following table:
To implement these methodologies, various packages and libraries are available. These libraries help answer the customer's questions by implementing the methodologies.
Explainable AI rests on seven pillars that can help governments and industries track a person's irresponsible behavior. The seven critical pillars are briefly described below:
Transparency: It provides complete transparency of the model, algorithm, and features used for prediction.
Example: What is the likelihood that Mr. Jain has obeyed all the rules and taken COVID-19 precautions?
The system says that Mr. Jain is not following the rules because he is not wearing a mask, and it provides this interpretation using visualization.
Domain Sense: The explanation the system provides should make sense in the application's domain and be easy to understand for the user who will use it. An explanation is of no worth if the user cannot understand it. Therefore, Akira AI provides explanations using visualization so that the end user can easily interpret them.
Consistency: The explanation should be consistent across any number of runs. When the input values are the same, the explanation should not change from one run of the code to the next; it should always give the same result.
Parsimony: The explanation the system provides should be straightforward. When the end user asks what general rule the system follows, the model gives its output as simple text that is easy to understand, for example:
Wear Mask = YES AND Social Distance >= 100 cm
This means the person is following the rules.
However, the simplest explanation is not always sufficient. When the model uses many features, presenting them all in the same way (as text) can make the explanation difficult for the user to understand.
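The parsimonious rule above can be sketched as a small predicate. This is only an illustration of the example rule; the function name and the 100 cm threshold are taken from the text, not from Akira AI's actual implementation:

```python
def is_following_rules(wears_mask: bool, social_distance_cm: float) -> bool:
    """Parsimonious rule from the example: a person complies only if
    they wear a mask AND keep at least 100 cm of social distance."""
    return wears_mask and social_distance_cm >= 100

# A masked person standing 150 cm away complies; an unmasked one does not.
print(is_following_rules(True, 150))   # -> True
print(is_following_rules(False, 150))  # -> False
```

Because the rule is a single conjunction over two features, the explanation stays as simple as the text above; parsimony degrades as more conditions are added.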
Generalizability: The explanation of the AI system should generalize.
Example: The system decides that Mr. Jain is not following the rules and explains this by the feature that he is not wearing a mask. This explanation does not generalize to the whole organization, as some people are flagged because they do not maintain social distance. There are several levels of generalizability:
Local Model Generalizability: Instance-level generalizability is known as local generalizability. Examples: LIME, SHAP, etc.
Global Model Generalizability: Model-level generalizability is known as global generalizability. Examples: decision trees, rule-based models.
Cohort-Level Model Generalizability: A type of global generalizability in which the explanation is generated at the level of a cohort (a subgroup of similar instances).
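A local (instance-level) explanation in the spirit of LIME can be sketched by perturbing one feature at a time around a single instance and reporting which change flips the prediction. The toy model and feature names below are hypothetical illustrations, not the production tracker, and real LIME fits a weighted surrogate model rather than toggling features:

```python
# Toy "tracker" model: predicts 1 (compliant) only when both rules hold.
def model(features):
    return int(features["mask"] == 1 and features["distance_cm"] >= 100)

def local_explanation(instance):
    """LIME-style idea in miniature: change one feature at a time and
    report which single change would flip the model's prediction for
    THIS instance. The result is valid locally, not for the whole model."""
    base = model(instance)
    influential = []
    # Perturbations to try per feature (hypothetical, for illustration).
    perturbations = {"mask": 1 - instance["mask"],
                     "distance_cm": 200 if instance["distance_cm"] < 100 else 50}
    for name, new_value in perturbations.items():
        perturbed = dict(instance, **{name: new_value})
        if model(perturbed) != base:
            influential.append(name)
    return base, influential

# Mr. Jain: no mask, adequate distance -> flagged, and only the mask matters.
prediction, reasons = local_explanation({"mask": 0, "distance_cm": 150})
print(prediction, reasons)  # -> 0 ['mask']
```

For a different person flagged for standing too close, the same procedure would name the distance feature instead, which is exactly why a local explanation does not generalize across the organization.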
Trust/Performance: Along with trust and explainability, the model should also perform well.
As the figure depicts, accuracy typically decreases as a model's explainability increases, which is not acceptable. It is therefore vital to choose a model that provides accuracy along with explainability, because we cannot compromise on accuracy.
In this use case, we use the Random Forest algorithm, which provides good accuracy and explainability, together with the various methodologies already discussed.
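One model-agnostic way to explain an opaque model such as a Random Forest is permutation feature importance: scramble one feature's values across records and measure how much accuracy drops. The sketch below uses a simple rule function as a stand-in for the trained forest and a tiny hand-made dataset, both purely illustrative; a deterministic rotation replaces the usual random shuffle so the result is reproducible:

```python
# Toy stand-in for the trained tracker model (the real system would be
# a fitted Random Forest; this fixed rule is only for illustration).
def model(x):
    return int(x["mask"] == 1 and x["distance_cm"] >= 100)

# Hypothetical records: (features, true label), 1 = compliant.
DATA = [
    ({"mask": 1, "distance_cm": 150}, 1),
    ({"mask": 0, "distance_cm": 150}, 0),
    ({"mask": 1, "distance_cm": 50}, 0),
    ({"mask": 1, "distance_cm": 200}, 1),
]

def accuracy(data):
    return sum(model(x) == y for x, y in data) / len(data)

def permutation_importance(data, feature):
    """Rotate one feature's values across records and measure how much
    accuracy drops. A large drop means the model relies on that feature."""
    values = [x[feature] for x, _ in data]
    rotated = values[1:] + values[:1]          # deterministic "shuffle"
    broken = [(dict(x, **{feature: v}), y) for (x, y), v in zip(data, rotated)]
    return accuracy(data) - accuracy(broken)

for f in ("mask", "distance_cm"):
    print(f, permutation_importance(DATA, f))
```

Because the technique only needs predictions, it applies unchanged to a real Random Forest, giving a global explanation without opening the black box.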
Fidelity: The explanation must be aligned with what the model actually does; an explanation that misrepresents the model's behavior is worse than none.
Explainable AI is an excellent approach for gaining customers' trust and confidence. It makes the system more trustworthy and interpretable. With this approach, we can make the system more productive by tracking performance, fairness, errors, data, and more.