Enabling a Trustworthy AI Model Lifecycle

Organisations need new solutions for assessing and monitoring the impacts of Artificial Intelligence (AI) systems to ensure positive outcomes in a complex and evolving legal and regulatory environment. An AI model lifecycle assessment is a technical evaluation that helps identify and address the potential risks and unintended consequences of AI systems across a business, building trust and supporting sound decision making around AI. To enable a trustworthy AI model lifecycle, a series of qualitative and quantitative checks is performed across all lifecycle phases. YAGHMA helps companies move from trustworthy AI principles and values to practice by defining and implementing solutions across the AI model lifecycle.

In our work, the AI system lifecycle comprises four phases: i) design, data and models, a context-dependent sequence encompassing planning and design, data collection and processing, and model building; ii) verification and validation; iii) deployment; and iv) operation, use and monitoring. Figure 1 illustrates these phases. They often take place in an iterative manner and are not necessarily sequential, and the decision to retire an AI system from operation may occur at any point during the operation, use and monitoring phase.
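
Read as a process, these phases form a loose state machine: mostly forward-moving, with loops back to earlier phases and a possible exit during operation. The sketch below illustrates that flow in Python; the names (Phase, ALLOWED, transition) and the specific transition sets are hypothetical illustrations made for this example, not a prescribed model.

```python
from enum import Enum, auto

class Phase(Enum):
    """The four lifecycle phases described above, plus retirement."""
    DESIGN_DATA_MODELS = auto()        # planning/design, data, model building
    VERIFICATION_VALIDATION = auto()
    DEPLOYMENT = auto()
    OPERATION_USE_MONITORING = auto()
    RETIRED = auto()

# Phases are iterative rather than strictly sequential: findings in a later
# phase can send the system back to an earlier one, and retirement may be
# decided at any point during operation, use and monitoring.
ALLOWED = {
    Phase.DESIGN_DATA_MODELS: {Phase.VERIFICATION_VALIDATION},
    Phase.VERIFICATION_VALIDATION: {Phase.DESIGN_DATA_MODELS, Phase.DEPLOYMENT},
    Phase.DEPLOYMENT: {Phase.OPERATION_USE_MONITORING},
    Phase.OPERATION_USE_MONITORING: {
        Phase.DESIGN_DATA_MODELS, Phase.VERIFICATION_VALIDATION,
        Phase.DEPLOYMENT, Phase.OPERATION_USE_MONITORING, Phase.RETIRED,
    },
    Phase.RETIRED: set(),
}

def transition(current: Phase, target: Phase) -> Phase:
    """Move to the target phase if the lifecycle allows that transition."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target
```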

YAGHMA supports AI actors across the AI model lifecycle in creating trustworthy AI systems, from designing, modelling, analysing and developing to operating, integrating, deploying and updating them. During each phase of the lifecycle, the AI Impact Assessment Tool runs a set of checks to improve the AI system against its prioritised values.

Our AI Impact Assessment Tool receives feedback from each phase in order to measure and manage both the positive and negative impacts of the AI system against ethical, legal, societal and environmental requirements. At each phase, the AI system's stakeholders are consulted to improve and adjust the system. Once trustworthy AI value priorities are defined, we assess them against potential legal expectations and internationally applied ethical guidelines, such as the EU AI Act and those of the OECD and IEEE. Through this process, the AI model's ecosystem is continuously adapted and improved over time.
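
As an illustration of this feedback loop, the minimal sketch below weights per-dimension stakeholder feedback by prioritised values and flags the dimensions that need mitigation before the next iteration of a phase. The function name assess_phase, the scoring scale and the weighting scheme are assumptions made for the example, not the actual interface of the AI Impact Assessment Tool.

```python
# Hypothetical per-phase assessment loop: feedback scores per dimension
# (negative values indicate negative impact) are weighted by value priorities.
DIMENSIONS = ("ethical", "legal", "societal", "environmental")

def assess_phase(phase: str,
                 feedback: dict[str, float],
                 priorities: dict[str, float]) -> list[str]:
    """List the dimensions whose weighted impact is net negative for this phase."""
    actions = []
    for dim in DIMENSIONS:
        weighted = feedback.get(dim, 0.0) * priorities.get(dim, 1.0)
        if weighted < 0.0:  # net negative impact: consult stakeholders, adjust
            actions.append(f"{phase}: mitigate negative {dim} impact")
    return actions

# Example: feedback gathered from stakeholders during deployment, with legal
# impact weighted heavily (e.g. to reflect EU AI Act obligations).
print(assess_phase(
    "deployment",
    feedback={"ethical": 0.4, "legal": -0.2, "societal": 0.1, "environmental": -0.1},
    priorities={"legal": 2.0, "environmental": 0.5},
))
```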

The insights gained from the AI Impact Assessment Tool and the AI Taxonomy enable the design, deployment and uptake of an AI system with explicit consideration for individual and societal trustworthy AI values, such as transparency, sustainability, privacy, fairness and accountability, as well as values typically considered in systems engineering, such as efficiency and effectiveness.

Through our tools, YAGHMA guides companies in sharing sufficient and appropriate information about their AI systems with their stakeholders to build trust across AI lifecycle ecosystems. To this end, we build understanding of the ethical content of AI solutions, enrich information about the extraction and prioritisation of core trustworthy AI values, improve the explainability of AI solutions, and advise on the availability of collected information both during an AI system's development and afterwards, through its deployment and use.

For more information, contact Emad Yaghmaei.

Emad Yaghmaei

Senior Research Consultant

Email: ey@yaghma.nl | Mobile: +31 6 82425539