Thank you, Madam President.
We are grateful to Secretary-General António Guterres for taking part in this important debate, and my thanks also go to Mr. Jack Clark and Professor Yi Zeng for their valuable and impressive contributions.
"I believe it's only a matter of time before we see thousands of robots like me out there making a difference."
These are the words of the robot Ameca, speaking to a journalist at the "Artificial Intelligence for Good" conference just mentioned by the Secretary-General, which took place two weeks ago in Geneva, organized by the International Telecommunication Union and Switzerland.
Artificial intelligence (AI) can be a challenge because of its speed and apparent omniscience, but it can and must serve peace. As we turn our attention also to a "New Agenda for Peace", it's in our hands to ensure that AI makes a difference to the benefit and not the detriment of humanity. In this context, let's seize the opportunity to lay the groundwork towards AI for good by working closely with cutting-edge science.
In this regard, the Swiss Federal Institute of Technology Zurich is developing a prototype of an AI-assisted analysis tool for the United Nations Operations and Crisis Centre. This tool could explore the potential of AI for peacekeeping, in particular for the protection of civilians and peacekeepers. In addition, Switzerland recently launched the "Swiss Call for Trust & Transparency initiative", where academia, private sector and diplomacy jointly seek practical and rapid solutions to AI-related risks.
The Council must also work to counter the risks to peace posed by AI. That's why we're so grateful to the UK for organizing this important debate. For example, let's look at cyber operations and disinformation. False narratives undermine public confidence in governments and peace missions. In this respect, AI is a double-edged sword: While it can accentuate disinformation, it can also be used to detect false narratives and hate speech.
So how can we harness the benefits of AI for peace and security while minimizing the risks? I'd like to make three suggestions.
First, we need a common framework, shared by all the players involved in the development and application of this technology, namely governments, businesses, civil society and research organizations. I think Mr. Clark made this very clear earlier. AI does not exist in a normative vacuum. Existing international law - including the UN Charter, international humanitarian law and human rights - applies. Switzerland is involved in all UN processes aimed at reaffirming and clarifying the international legal framework for AI and, in the case of lethal autonomous weapon systems, at developing prohibitions and restrictions.
Second, AI must be human-centered. Or, as Professor Zeng has just put it, "AI should never pretend to be human". We call for its development, deployment and use to always be guided by ethical and inclusive considerations. Clear responsibility and accountability must be maintained, both for states and for companies or individuals.
And finally, the relatively early stage of AI development offers us an opportunity to ensure equality and inclusion, and to counter discriminatory stereotypes. AI is only as good and reliable as the data we provide it with. If this data reflects prejudices and stereotypes - for example, of gender - or is simply not representative of its operational environment, AI will give us poor advice for maintaining peace and security. It is the responsibility of developers and users, both governmental and non-governmental, to ensure that AI does not reproduce the harmful societal biases we strive to overcome.
The Security Council has a responsibility to proactively monitor developments around AI and the threat it may pose to the maintenance of international peace and security. It should be guided by the results of the General Assembly on the related legal framework. The Council must also use its powers to ensure that AI serves peace, such as by anticipating risks and opportunities, or by encouraging the Secretariat and peace missions to use this technology in innovative and responsible ways.
My delegation used artificial intelligence for our first debate under our presidency, as well as in the context of an exhibition with the ICRC on digital dilemmas. We were able to recognize the impressive potential of this technology at the service of peace. We therefore look forward to making "artificial intelligence for good" an integral part of the "New Agenda for Peace".
I thank you.