Abstract 

Maternal, sexual and reproductive healthcare (MSRH) are sensitive, urgent public health issues that require timely, trustworthy and authentic medical responses. Unfortunately, the curative healthcare systems of Low- and Middle-Income Countries (LMICs) are insufficiently responsive to such healthcare needs. These needs vary among social groups and are often shaped by social inequalities such as income, gender and education. Therefore, health information seekers turn to unregulated online healthcare platforms, social media and Large Language Models (LLMs), which provide unverified healthcare information without oversight.

This work systematically examined the philosophical foundations of responsible data and Artificial Intelligence (AI) practices governing data and AI modelling for intelligent systems, drawing on peer-reviewed articles, book chapters, technical reports, and studies published between 1973 and 2022. These studies were restricted to the philosophy of AI and Society 5.0, and they informed the derivation of over 29 forms of AI philosophy together with their fundamental relationships with Society 5.0. This unveiled intrinsic manifestations of algorithmic unfairness arising from inequitable AI and Machine Learning (ML) training datasets, as well as from irresponsible data and AI modelling practices.

We further traced this algorithmic unfairness to unguided and unregulated AI industry practices, propagated by the selection of inappropriate research paradigms to inform the creation of AI and ML training datasets for building intelligent healthcare systems. Such systems include online platforms and chatbots designed to provide authentic, timely responses that inform healthcare decision-making among vulnerable online information seekers, such as teenagers and young women across various social groups. This pointed to the need for responsible and Inclusive Intersectional AI practices and research approaches to creating ML and AI training datasets for equitable intelligent healthcare systems. Therefore, we intersectionally crowdsourced maternal healthcare advice from over 500 verified practising healthcare professionals at Lira University Teaching Hospital, Brac University and Brac Uganda’s health program, versus their online social acquaintances within their social networks, to create a dataset based on responsible data practices. We also scraped, curated and annotated MSRH data from and about African contexts. This data can be used not only to fine-tune existing intelligent health systems but also to develop responsible software systems that are contextually relevant to LMICs in Africa.

We have implemented trustworthy medical sentiment analysis with Local Interpretable Model-Agnostic Explanations (LIME) as responsible AI principles to distinguish between authentic and non-authentic maternal healthcare advice. Surprisingly, we obtained a training set accuracy of 93% and a validation set accuracy of 56%, with a generalization log loss of 0.259, a generalization balanced accuracy of 83% and a generalization Area Under the Curve of 90%, indicating that our models performed well at evaluating context and sentiment but failed to reliably distinguish between authentic and non-authentic medical advice. This reveals computational uncertainty among AI-driven models in healthcare. It also means that AI models cannot reliably distinguish between authentic and non-authentic medical advice, hence the need for better conversational AI techniques and online healthcare tools to conversationally disseminate authentic medical advice. While making our responsible medical corpus openly available for researchers to work with, we embarked on creating conversational AI techniques that allow information seekers to leverage conversational AI tools like ChatGPT through prompt engineering and Retrieval-Augmented Generation. The prompt engineering techniques have been published and made openly available to responsibly guide health information seekers; however, there remains an urgent need for policy, guidelines and regulation of online healthcare practice.
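The core idea behind model-agnostic local explanations can be sketched in a few lines: perturb the input and observe how the prediction shifts. The snippet below is a minimal, illustrative sketch only; the keyword scorer `predict_proba` and the cue set `AUTHENTIC_CUES` are hypothetical stand-ins for the trained sentiment classifier described above, and the occlusion-style weighting is a simplification of full LIME, which fits a local linear surrogate model over many random perturbations.

```python
# Illustrative sketch of a model-agnostic local explanation in the
# spirit of LIME. All names here are hypothetical examples, not the
# paper's actual model or data.

AUTHENTIC_CUES = {"dosage", "consult", "midwife", "trimester", "clinic"}

def predict_proba(text: str) -> float:
    """Toy stand-in classifier: probability that text is authentic
    medical advice, based on simple keyword cues."""
    tokens = text.lower().split()
    hits = sum(1 for t in tokens if t in AUTHENTIC_CUES)
    return min(1.0, hits / 3)

def explain_locally(text: str) -> dict:
    """Occlusion-style local explanation: each token's weight is the
    drop in predicted probability when that token is removed."""
    tokens = text.split()
    base = predict_proba(text)
    return {
        tok: base - predict_proba(" ".join(tokens[:i] + tokens[i + 1:]))
        for i, tok in enumerate(tokens)
    }

advice = "Consult your midwife about the correct dosage"
print(predict_proba(advice))
print(explain_locally(advice))
```

In this toy example, removing a cue word such as "Consult" lowers the predicted probability while removing a filler word such as "your" does not, so the explanation assigns positive weight only to the cue words; the same perturb-and-observe logic underlies LIME's explanations of any black-box classifier.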

Keywords: Artificial Intelligence (AI), Conversational AI, Responsible AI, Large Language Models (LLMs), Maternal Health, Health Equity.