IEC, ISO and ITU respond to FLI open letter on safe and responsible AI development
International Standards can help ensure safe and responsible AI development
We read the recent open letter from the Future of Life Institute (FLI) with great interest and understand their concerns about the potential risks of AI systems with human-competitive intelligence. As the FLI pointed out, it is essential to address these challenges proactively, since such systems can pose profound risks to society and humanity.
The open letter suggests that no advanced AI system should be released until the developer can demonstrate convincingly that it does not pose an undue risk. In this context, it is noteworthy that UNESCO’s 193 Member States adopted the first global normative instrument on the ethics of artificial intelligence in 2021. The OECD likewise states that AI systems should be robust, secure and safe throughout their entire life cycle, so that they function appropriately and do not pose unreasonable risks to safety.
Developers are responsible for demonstrating that their systems meet these criteria. International standards and conformity assessment can play a crucial role in this process.
The work of the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU) in this domain addresses many of the concerns of society. Our standards can underpin regulatory frameworks and, when adopted, can provide appropriate guardrails for responsible, safe and trustworthy AI development.
IEC and ISO have jointly developed a series of AI standards that cover the entire AI ecosystem, including terminology, governance and risk management.
The three organizations are collaborating with the Office of the UN High Commissioner for Human Rights on the implementation of recent guidance from the UN Human Rights Council on technical standards and human rights. Standards are building blocks for the design, development and deployment of technology, making it imperative that standardization processes incorporate human rights perspectives.
International standards are the result of a global consensus-building process and involve stakeholders from industry, academia, government and civil society.
These standards emphasize the importance of human oversight and accountability, and provide a framework for the development and deployment of AI systems that are transparent, explainable, reliable and secure.
Organizations can demonstrate their commitment to responsible AI and build trust with their stakeholders by adopting our standards. Moreover, the standards can help mitigate the risks associated with AI systems and ensure that they are aligned with societal values and expectations.
IEC, ISO and ITU can provide a platform to deploy existing standards and develop new ones that mitigate many of the issues raised. We welcome FLI members and all other stakeholders who are not already working with us to engage in dialogue, to join us in setting these international consensus-based standards, and to encourage their adoption.
Sincerely,
Philippe Metzger, Secretary-General of IEC (International Electrotechnical Commission)
Sergio Mujica, Secretary-General of ISO (International Organization for Standardization)
Seizo Onoe, Director, Telecommunication Standardization Bureau, ITU (International Telecommunication Union)
Brain rental service for ISO certifications/accreditations.
If you actually READ the letter, it says nothing. It restates the FLI request to "pause" AI development, but then never agrees to do so. Instead, the letter simply repeats the same old marketing language used by ISO and IEC, without really discussing AI or standards development at all. For the authors of the FLI letter, this was the most polite middle-finger ISO could have given someone.
Managing Director at TechnologyCare
This will surely require "extreme" cooperation and a unified digital platform from where progress can be measured. I think in the physical world it is a lot easier to nail down standards. But in a world where "bots" are allowed to run free and provide guidance? We might need a new digital AI-UN with "bot" members to keep pace with the audit tasks at digital speed!
Engineering & Operations Leader | Chartered Engineer (CEng FInstMC) | NED | Energy, Nuclear, Automotive & Industrial Sectors | Standards & Safety Leadership | Mentor | Speaker | SDG Advocate | IEC Young Professional UK
Very interesting to read! AI was the theme of last year's conference at IEC too. Good to see the follow-up and to hear people's thoughts.
Certified IEEE AI Ethics Lead Assessor/AI Architect and Hard Law Influencer "Working to Protect Humanity from the potential harm A/IS may cause”. LinkedIn AI Governance, Risk and Conformity Group
Some of this is true, and the rest is a one-sided view from the "pay to play" standards developers. All participants from various industries pay for their employees to participate in order to support their view of the world in the global marketplace. These standards are not free; they cost hundreds of dollars and in most cases are too abstract. Lastly, when these bodies embarked on the path of developing AI standards, they did so without understanding the gaps or what the AI management system is supposed to look like, which is why ISO 42001 still isn't published. IEEE Introduces New Program for "Free" Access to AI Ethics and Governance Standards https://blended-learning.ieee.org/Portal/Catalog/ViewCourse/11852/IEEE-Awareness-Module-on-AI-Ethics