27/07/2022
Publication: Can Ontologies help making Machine Learning Systems Accountable?
Nowadays, even though Artificial Intelligence (AI) technologies have reached a considerable level of maturity, their adoption, deployment and application are not as widespread as expected. This can be attributed to several factors, such as cultural barriers, but most importantly to potential users' lack of trust in AI systems.
As our partner from Tekniker points out in the paper ‘Can Ontologies help making Machine Learning Systems Accountable?’, trustworthy AI systems should not only be explainable but also accountable. Specifically, accountability can be defined as ‘the ability to determine whether a decision was made in accordance with procedural and substantive standards and to hold someone responsible if those standards are not met’. In other words, with an accountable AI system, ‘the causes that derived a given decision can be discovered, even if its underlying model’s details are not fully known or must be kept secret’. As a result, an adequate representation of the data, processes and workflows involved in AI systems could contribute to making them accountable.
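By way of illustration, the short sketch below (not taken from the paper) shows how the provenance of a single ML decision could be recorded as RDF triples using the W3C PROV-O vocabulary and Python's rdflib library. The ex: namespace and all individual names (decision42, creditModel_v3 and so on) are hypothetical placeholders.

    # A minimal sketch (not from the paper): recording the provenance of an
    # ML decision as RDF triples with the W3C PROV-O vocabulary via rdflib.
    # The EX namespace and all individual names are illustrative assumptions.
    from rdflib import Graph, Namespace, Literal, RDF
    from rdflib.namespace import PROV, XSD

    EX = Namespace("http://example.org/ml#")  # hypothetical namespace

    g = Graph()
    g.bind("prov", PROV)
    g.bind("ex", EX)

    # The decision is an entity generated by a scoring activity...
    g.add((EX.decision42, RDF.type, PROV.Entity))
    g.add((EX.scoringRun, RDF.type, PROV.Activity))
    g.add((EX.decision42, PROV.wasGeneratedBy, EX.scoringRun))

    # ...which used a trained model and an input record,
    # and was associated with a responsible agent.
    g.add((EX.scoringRun, PROV.used, EX.creditModel_v3))
    g.add((EX.scoringRun, PROV.used, EX.applicantRecord))
    g.add((EX.scoringRun, PROV.wasAssociatedWith, EX.dataScienceTeam))
    g.add((EX.scoringRun, PROV.endedAtTime,
           Literal("2022-07-27T10:00:00", datatype=XSD.dateTime)))

    print(g.serialize(format="turtle"))

With such a record in place, the question ‘which model and which data produced this decision, and who was responsible?’ becomes answerable from the graph itself, even when the model's internals remain opaque.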
In addition, a variety of technologies offer conceptual modelling capabilities to describe a domain of interest; nevertheless, only ‘ontologies combine this feature with Web compliance, formality and reasoning capabilities’.
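The reasoning capabilities mentioned above can be hinted at with another small sketch, again built on illustrative assumptions (the owlrl library for RDFS entailment and a toy class hierarchy, not the paper's ontology): once the hierarchy is asserted, facts that were never stated explicitly become queryable.

    # Sketch of ontology reasoning (assumptions: the owlrl library and a toy
    # class hierarchy; this is not the paper's ontology).
    from rdflib import Graph, Namespace, RDF, RDFS
    import owlrl

    EX = Namespace("http://example.org/ml#")  # hypothetical namespace
    g = Graph()

    # Toy hierarchy: every ClassificationModel is a MachineLearningModel.
    g.add((EX.ClassificationModel, RDFS.subClassOf, EX.MachineLearningModel))
    g.add((EX.creditModel_v3, RDF.type, EX.ClassificationModel))

    # Materialise RDFS entailments: creditModel_v3 is now also typed as an
    # EX.MachineLearningModel, although that was never asserted directly.
    owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

    # A SPARQL query over the enriched graph retrieves the inferred fact.
    results = g.query(
        "SELECT ?m WHERE { ?m a <http://example.org/ml#MachineLearningModel> }"
    )
    for row in results:
        print(row.m)  # -> http://example.org/ml#creditModel_v3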
Since the potential of Semantic Technologies for achieving Trustworthy AI has not yet been fully exploited, this article proposes an ontology-based approach aimed at providing Machine Learning systems with accountability, paving the way towards trustworthy AI systems.