
23/05/2023

Delivering Objective Measurement: AI-PROFICIENT’s Validation Methodology

In Deliverable 6.1, ‘Validation methodology, ethical and acceptance criteria’, Pedro de la Peña, Alexandre Voisin, Kerman Lopez de Calle, Julien Hintenoch, Sirpa Kallio, Christophe Van Loock, Katarina Stanković, Dea Pujić, Vasillis Spais, Karen Fort, and Marc Anderson mark a significant milestone in the project’s use-case evaluation and ethical considerations, focusing on the creation of a validation methodology that establishes objective measurement criteria for AI-PROFICIENT’s outcomes.

To achieve this objective, the AI-PROFICIENT team drew primarily on information gathered in WP1 (Pilot site characterization, requirements and system architecture), and specifically in D1.4 (Project requirements and performance assessment KPIs), which provides a comprehensive list and description of the user requirements associated with the eight use cases developed in the project.

Specifically, the authors propose a generic methodology to measure the degree of compliance of the different AI modules deployed in an industrial facility. This is an initial version of the methodology, which will be refined in subsequent iterations to reflect the progress made in the project. As the AI-PROFICIENT team explains, the aim is to measure the results obtained from applying AI-based developments in production lines as objectively as possible. Where fully objective measurement is difficult, the methodology incorporates surveys of multiple users to gather their assessments and minimize bias.
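As an illustration only, and not the procedure defined in Deliverable 6.1, the sketch below shows one way such an approach could be expressed: requirements backed by a quantitative KPI are scored against their target values, while requirements without an objective metric fall back on the average of several users’ survey ratings. All requirement names, values, and scales here are hypothetical.

```python
# Illustrative sketch only -- names, values and scales are hypothetical,
# not taken from Deliverable 6.1.
from statistics import mean

def kpi_compliance(measured: float, target: float, higher_is_better: bool = True) -> float:
    """Degree of compliance of a quantitative KPI, clipped to [0, 1]."""
    if higher_is_better:
        ratio = measured / target if target else 0.0
    else:
        ratio = target / measured if measured else 0.0
    return max(0.0, min(1.0, ratio))

def survey_compliance(ratings: list[int], scale_max: int = 5) -> float:
    """Fallback for requirements with no objective metric:
    average the ratings of several users to reduce individual bias."""
    return mean(ratings) / scale_max

# Hypothetical use-case requirements
requirements = [
    {"name": "downtime reduction (%)", "measured": 12.0, "target": 15.0},  # objective KPI
    {"name": "operator trust in AI advice", "ratings": [4, 3, 5, 4]},      # survey-based
]

scores = [
    kpi_compliance(r["measured"], r["target"]) if "target" in r
    else survey_compliance(r["ratings"])
    for r in requirements
]
print(f"Overall degree of compliance: {mean(scores):.2f}")
```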

Read the deliverable here