Welcome to the results repository for our paper "[*Assertiveness-based Agent Communication for a Personalized Medicine on Medical Imaging Diagnosis*](https://doi.org/10.1145/3544548.3580682)" ([10.1145/3544548.3580682](https://doi.org/10.1145/3544548.3580682)), published in the [Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems](https://dl.acm.org/doi/proceedings/10.1145/3544548) ([CHI '23](https://chi2023.acm.org/)) and presented during the "[AI in Health](https://programs.sigchi.org/chi/2023/program/session/97368)" track. In this paper, we explore how human-AI interactions are affected by the ability of an AI agent not only to incorporate granular patient information into its outputs (*e.g.*, the [`dataset-uta7-annotations`](https://github.com/MIMBCD-UI/dataset-uta7-annotations), [`dataset-uta11-rates`](https://github.com/MIMBCD-UI/dataset-uta11-rates), and [`dataset-uta11-findings`](https://github.com/MIMBCD-UI/dataset-uta11-findings) repositories) but also to adapt its communication tone (*i.e.*, more assertive or more suggestive) to the medical experience (*i.e.*, novice or expert) of the clinician. Specifically, we compare AI outputs that explain clinical arguments to clinicians (*e.g.*, the [`dataset-uta7-co-variables`](https://github.com/MIMBCD-UI/dataset-uta7-co-variables) and [`dataset-uta11-findings`](https://github.com/MIMBCD-UI/dataset-uta11-findings) repositories), including more granular patient information about the lesion details, against a conventional agent (*i.e.*, the [`prototype-breast-screening`](https://github.com/MIMBCD-UI/prototype-breast-screening) repository) that provides only numeric estimates (*e.g.*, [BIRADS](https://radiopaedia.org/articles/breast-imaging-reporting-and-data-system-bi-rads) and [accuracy](https://radiopaedia.org/articles/validation-split-machine-learning?lang=us)) of the classification.
The study was conducted using a dataset of medical images (*e.g.*, the [`dataset-uta7-dicom`](https://github.com/MIMBCD-UI/dataset-uta7-dicom) and [`dataset-uta11-dicom`](https://github.com/MIMBCD-UI/dataset-uta11-dicom) repositories) and associated patient information, where the AI models (*e.g.*, the [`densenet-breast-classifier`](https://github.com/MIMBCD-UI/densenet-breast-classifier), [`ai-classifier-densenet161`](https://github.com/MIMBCD-UI/ai-classifier-densenet161), [`ai-segmentation-densenet`](https://github.com/MIMBCD-UI/ai-segmentation-densenet), and [`ai-nns-mri`](https://github.com/MIMBCD-UI/ai-nns-mri) repositories) were trained to classify the images based on various features. The data and source code used in this study are available in this repository, along with a detailed explanation of the methods and results. We hope this work will contribute to the growing field of human-AI interaction in medicine and help improve the communication between AI systems and clinicians.