Assessing the Viability and Human Acceptability of Explanations Generated by Predictive Models
Aim of this project - Develop an interpretable classifier for time-series data using Logic Explained Networks (LENs), a next-generation neurosymbolic AI approach that produces first-order logic explanations for its predictions.
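To make the idea concrete, the sketch below shows, in a heavily simplified form, how a logic explanation might be distilled from binary concept activations: find the literals that all positive examples agree on and emit them as a conjunctive rule. This is a toy illustration of the explanation style, not the actual LEN architecture (which learns such rules via attention over concepts inside a neural network); the concept names and data are hypothetical, imagined as features computed over time-series windows.

```python
# Toy sketch: distil a first-order-logic-style rule from binary concept
# activations, in the spirit of Logic Explained Networks.
# Concept names and data below are hypothetical illustrations.

def extract_rule(samples, labels, concepts):
    """Return a conjunction of literals shared by every positive sample."""
    positives = [s for s, y in zip(samples, labels) if y == 1]
    literals = []
    for i, name in enumerate(concepts):
        if all(s[i] for s in positives):
            literals.append(name)           # concept true in all positives
        elif all(not s[i] for s in positives):
            literals.append(f"~{name}")     # concept false in all positives
    return " & ".join(literals) + " -> positive_class"

# Hypothetical concepts, e.g. computed per window of a time series.
concepts = ["rising_trend", "high_variance", "spike_present"]
samples = [
    (1, 0, 1),  # positive example
    (1, 1, 1),  # positive example
    (0, 0, 1),  # negative example
]
labels = [1, 1, 0]

print(extract_rule(samples, labels, concepts))
# → rising_trend & spike_present -> positive_class
```

A real LEN would learn which concepts matter end-to-end and produce explanations that generalise beyond the training set, but the output format, a readable logic rule over named concepts, is the same kind of artefact this project aims to evaluate for human acceptability.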