From categorization to impact assessment for Digital Health and AI
As Artificial Intelligence (AI) becomes increasingly prevalent in the health sector, the need to assess and report its impact on the sector and its stakeholders (e.g. patients, clinicians, non-professional caregivers) becomes ever more important. The first of its kind, YAGHMA’s categorization tool, the AI Taxonomy, paves the way for healthcare professionals, policy makers and other healthcare stakeholders to make informed decisions about AI’s societal impacts.
Building on the results of its AI Impact Assessment Tool, YAGHMA develops sector-specific AI Taxonomy tools, here for the healthcare sector. The AI Taxonomy for healthcare is a categorization and classification tool that helps actors (including clinicians and other health professionals), health data processors and managers make informed decisions on the design, development, deployment, and use of AI systems. The AI Taxonomy consists of a list of trustworthy features and business actions for AI systems, with performance criteria for their contribution to ethical, legal, societal, health and wellbeing, and governance objectives.
Through the AI Taxonomy, YAGHMA defines disclosures and features of AI systems in the health domain that can be considered socially desirable, environmentally friendly, and ethically and legally acceptable. Over time, the list of disclosures, features, and business actions will grow to cover all relevant parts of the health domain, including Digital Health.
The AI Taxonomy for the Digital Health domain informs Digital Health and AI actors about their AI governance performance on ethical, legal, societal, and health and wellbeing aspects. Its performance criteria include quantitative and qualitative thresholds for assessing whether domain-specific actors (e.g. health actors) meet the performance standards of Trustworthy AI features and business actions. Derived from the AI Impact Assessment Tool, the AI Taxonomy provides a fine-tuned, tailor-made value categorization that lists AI value priorities for AI systems in healthcare.
Overall, the AI Taxonomy for health provides clarity via a common language for health and AI actors, helps translate commitments into compliance with globally approved healthcare and big data regulations, saves time and money for health providers and investors, and puts Trustworthy AI systems in a readily accessible context. Furthermore, the tool can help avoid the reputational risks of designing, developing, deploying, and using non-trustworthy AI systems in the health domain by screening out AI features that undermine broader ethical, legal, societal, and health and wellbeing objectives, for example through bias, privacy violations, or harm to well-being.
HOW CAN THE AI TAXONOMY BE USED?
The tool guides healthcare providers, policy makers, and industrial entrepreneurs in translating social, ethical, legal and regulatory requirements into practical recommendations for the lifecycle of AI solutions within the health sector. The disclosure section of the AI Taxonomy is intended to align with the requirements of globally recognized sets of responsible AI principles, including the EU AI Act and those of the IEEE and the OECD.
Together with YAGHMA’s AI Impact Assessment, the AI Taxonomy ensures that all relevant requirements and developments for societal and health impacts are considered, while also helping businesses address each stage of the AI lifecycle in line with ethical and regulatory health requirements. Healthcare providers can thus better inform healthcare-related actors (e.g. patients, health professionals) about the non-technical and non-financial aspects of AI technology.
For more information on the AI Taxonomy, contact Emad Yaghmaei.
Senior Research Consultant
Mobile: +31 6 82 42 55 39