IG1357 Trustable AI v1.0.0

Artificial intelligence (AI) technologies have become an integral part of data-driven decision-making across all industries. For communications service providers (CSPs) in particular, the technology promises to streamline processes, significantly cutting costs, and to revolutionize interactions in customer-facing areas such as marketing, sales, and service. AI systems are already making a significant contribution to internal processes such as product development, risk analysis, and network management. The development of generative AI technologies is opening up new horizons, and the range of imaginable future applications keeps growing (see also How telcos could use gen AI to revitalize profitability and growth | McKinsey); the technology is already used for customer service chatbots and hyper-personalization.

At the same time, concerns about the risks, manipulability, and bias of AI have grown, whether over biased decisions, incorrect or sensitive language used by chatbots, or cybersecurity. It is not only the forthcoming EU AI regulation that is prompting companies to think about robust yet pragmatic governance. Over the last decade, industry associations, research institutes, companies, and a growing number of governments have released guides, frameworks, and principles on the use of AI. These include the Beijing Academy of Artificial Intelligence’s Beijing AI Principles, IEEE’s Ethically Aligned Design, Microsoft’s Responsible AI Standard, IBM’s Principles for Trust and Transparency, and Google’s AI Principles, to name a few. Realistically assessing the risks of AI applications and minimizing them through appropriate measures ensures that Trustworthy AI can deliver real and measurable benefits.

This introductory guide examines how such risk assessment and mitigation can be put into practice so that Trustworthy AI delivers real and measurable benefits.

General Information

Document series: IG1357
Document version: 1.0.0
Status: Team Approved
Document type: Introductory Guide
Team approved: 08-Jan-2025
IPR mode: RAND
Published on: 13-Jan-2025
Date modified: 08-Jan-2025