
Bhusan Chettri Launches Tutorial Series on AI, Machine Learning, Deep Learning and Their Interpretability

Dr. Bhusan Chettri

Dr Bhusan Chettri, who earned his PhD from Queen Mary University of London, aims to provide an overview of machine learning and AI interpretability.

LONDON, UNITED KINGDOM, September 24, 2022 /EINPresswire.com/ -- Dr Bhusan Chettri, who earned his PhD from Queen Mary University of London, aims to provide an overview of machine learning and AI interpretability. To that end, he has launched a tutorial series on AI, Machine Learning, Deep Learning and their interpretability.

In his first tutorial, Bhusan Chettri focuses on providing an in-depth understanding of IML from multiple standpoints, considering different use cases and application domains, and emphasising why it is important to understand how a machine learning model that demonstrates impressive results makes its decisions. The tutorial also discusses whether such impressive results can be trusted for adoption in safety-critical businesses, for example medicine, finance and security. Visiting the first part of this tutorial series on AI, Machine Learning, Deep Learning and their interpretability on his official website will give a better idea.

Bhusan Chettri recently published his second tutorial, which provides an overview of Interpretable Machine Learning (IML), also known as Explainable AI (XAI), in safety-critical application domains such as medicine, finance and security. The tutorial discusses the need for explanations from AI and machine learning (ML) models, using two examples to set the context for the IML topic. Finally, it describes some of the important criteria that any ML/AI model in safety-critical applications must satisfy for successful adoption in real-world settings. But before going deeper into this edition, it is worth briefly revisiting the first part of this tutorial series on AI, Machine Learning, Deep Learning and their interpretability.

Part 1 mainly focused on providing an overview of various aspects of AI, machine learning, data, Big Data and interpretability. It is a well-known fact that data is the fuel driving the success of every machine learning and AI application. The first part described how vast amounts of data are generated (and recorded) every single minute through different mediums such as online transactions, sensors, video surveillance applications and social media platforms such as Twitter, Instagram and Facebook. Today's fast-growing digital age, which generates such massive data, commonly referred to as Big Data, has been one of the key factors behind the apparent success of current AI systems across different sectors.

The tutorial also provided a brief overview of AI, machine learning and deep learning, and highlighted their relationship: deep learning is a form of machine learning that uses artificial neural networks with more than one hidden layer to solve a problem by learning patterns from training data; machine learning solves a given problem by discovering patterns within the training data without necessarily using neural networks (machine learning using neural networks is simply referred to as deep learning); and AI is a general term that encompasses both. For example, a simple chess program built from a sequence of hard-coded if-else rules defined by a programmer can be regarded as AI that involves no data, i.e. no data-driven learning paradigm; see the sketch below. To put it in simple terms, deep learning is a subset of machine learning, and machine learning is a subset of AI.
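To make the rule-based example concrete, here is a minimal Python sketch, illustrative only and not taken from the tutorial, of "AI" without any data-driven learning: every decision comes from rules a programmer wrote by hand.

def rule_based_move(board_state: str) -> str:
    # Hard-coded if-else rules written by a programmer: no training data
    # and no learning, yet historically still described as AI.
    if "king in check" in board_state:
        return "move king to safety"
    if "capture available" in board_state:
        return "capture the piece"
    return "develop a minor piece"

print(rule_based_move("capture available on e5"))  # -> capture the piece

A learning-based chess program would instead estimate such decisions from data (recorded games), which is what separates machine learning from this kind of rule-based AI.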

The tutorial also briefly covered the back-propagation algorithm, the engine of neural networks and deep learning models (sketched below). Finally, it provided a basic overview of IML, stressing its need and importance for understanding how a model arrives at a particular outcome. It also briefly discussed a post-hoc IML framework (one that takes a pre-trained model and analyses its behavior), showcasing an ideal scenario with a human in the loop who makes the final decision on whether to accept or reject a model prediction.
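That engine can be written out in a few lines. Below is a minimal back-propagation sketch in Python with NumPy: a one-hidden-layer network trained on the XOR problem. It illustrates the general algorithm only; the dataset, layer sizes and learning rate are arbitrary choices for the sketch, not details from the tutorial.

import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem a network without a hidden layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2 -> 4 -> 1 network.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)   # hidden activations
    p = sigmoid(h @ W2 + b2)   # predictions

    # Backward pass: apply the chain rule layer by layer,
    # starting from the mean-squared-error loss.
    dp = 2 * (p - y) / y.size            # dLoss/dp
    dz2 = dp * p * (1 - p)               # through the output sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0, keepdims=True)
    dh = dz2 @ W2.T                      # error propagated back to the hidden layer
    dz1 = dh * h * (1 - h)               # through the hidden sigmoid
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0, keepdims=True)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(p.round(2).ravel())  # should approach [0, 1, 1, 0]

Deep learning frameworks automate exactly this backward pass; writing it out once makes clear that the "engine" is nothing more than the chain rule plus gradient descent.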

In the recent tutorial, Bhusan Chettri provided insight into XAI and IML in safety-critical application domains such as medicine, finance and security, where deploying ML or AI requires satisfying criteria such as fairness, trustworthiness and reliability. To that end, Dr Bhusan Chettri, who earned his PhD in Machine Learning and AI for Voice Technology from QMUL, London, described why today's state-of-the-art ML models need interpretability even when they offer impressive results as measured by a single evaluation metric (for example, classification accuracy). He elaborates on this in detail through two simple use cases of AI systems: wildlife monitoring (a dog vs wolf detector) and an automatic tuberculosis detector. He further detailed how biases in training data can prevent models from being adopted in real-world scenarios, and why understanding the training data and performing initial exploratory data analysis is equally crucial to ensure that models behave reliably at deployment (a sketch of such a bias check appears below). The next edition of this series will discuss different taxonomies of interpretable machine learning, along with various methods for opening black boxes and explaining the behavior of ML models. Stay tuned to his website for more updates on explainable AI.
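As a concrete footnote to the point about biased training data, here is a short Python sketch of the kind of exploratory check the dog-vs-wolf example motivates. The data and feature names (snow_background, fur_texture) are synthetic and hypothetical, not from the tutorial; the question it asks is whether a model leans on a spurious cue rather than the genuine signal.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000

# Synthetic data: 'snow_background' is spuriously correlated with the label
# (1 = wolf), while 'fur_texture' is the genuinely informative feature.
label = rng.integers(0, 2, size=n)
snow_background = label + rng.normal(scale=0.3, size=n)  # strong spurious proxy
fur_texture = label + rng.normal(scale=1.0, size=n)      # weaker genuine signal
X = np.column_stack([snow_background, fur_texture])

model = RandomForestClassifier(random_state=0).fit(X, label)
result = permutation_importance(model, X, label, n_repeats=10, random_state=0)

for name, imp in zip(["snow_background", "fur_texture"], result.importances_mean):
    print(f"{name}: {imp:.3f}")

# If 'snow_background' dominates, the model has learned the dataset bias rather
# than the animal -- exactly the failure mode the dog-vs-wolf example warns about.

Running a check like this before deployment is one small, practical instance of the exploratory data analysis the tutorial argues for.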

