Risks and explainability of artificial intelligence as a non-thing
Abstract
Starting from the premise that AI consists of algorithms acting on data, and adopting the deductive method, this article treats AI as a non-thing and discusses the risks arising from AI systems, especially the risks posed by opaque algorithms and automated decision-making. On that basis, it presents and addresses the need to achieve explainability in AI systems (Explainable AI, or XAI), encompassing responsibility, complexity, verifiability, and transparency. The article adopts a legal-philosophical approach grounded in Byung-Chul Han's (2022) philosophy of non-things, treating the risks arising from AI systems and the explainability of such systems as elements that will shape the application of AI in the Age of Non-Things.
Published: 2025-02-01
Section: Seção A: Artigos Convidados (Invited Papers)