Risks and explainability of artificial intelligence as a non-thing

Authors

Abstract

Based on the premise that AI is composed of algorithms that act on data, and adopting the deductive method, this article assumes that AI is a non-thing and discusses the risks arising from AI systems, especially the risks of not understanding an algorithm and of automated decision-making. It then presents and addresses the need to achieve explainability in AI systems (Explainable AI, or XAI), encompassing responsibility, complexity, verifiability, and transparency. The article adopts a legal-philosophical approach grounded in Byung-Chul Han's (2022) philosophy of non-things, treating the risks arising from AI systems and the explainability of such systems as elements that shape the future application of AI in the Age of Non-Things.

Author Biography

Cinthia Obladen de Almendra Freitas, Pontifícia Universidade Católica do Paraná

PhD in Informatics from the Pontifícia Universidade Católica do Paraná, Brazil; Full Professor at the School of Law and in the Graduate Program in Law at the same institution, cinthia.freitas@pucpr.br.

Published

2025-02-01