The Bletchley Declaration states: "AI presents enormous global opportunities: it has the potential to transform and enhance human wellbeing, peace and prosperity. To realise this, we affirm that, for the good of all, AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. We welcome the international community’s efforts so far to cooperate on AI to promote inclusive economic growth, sustainable development and innovation, to protect human rights and fundamental freedoms, and to foster public trust and confidence in AI systems to fully realise their potential."
The Declaration goes on to recognise that AI is already being deployed across many areas of daily life, and that this is a unique moment to ensure its safe development and use for the good of all. It identifies significant risks, noting that protecting human rights, transparency, explainability, fairness, accountability, regulation, safety, human oversight, ethics, bias mitigation, privacy and data protection all need to be addressed. There are also risks from the manipulation of content or the generation of deceptive content. These risks need to be addressed urgently.
The Declaration notes the particular safety risks arising at the 'frontier' of AI, which it defines as "those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks - as well as relevant specific narrow AI that could exhibit capabilities that cause harm - which match or exceed the capabilities present in today’s most advanced models". Substantial risks may arise from potential misuse or from unintended issues of control relating to alignment with human intent, partly because those capabilities are not fully understood and are therefore harder to predict.
The Declaration highlights risks of especial concern in domains such as cybersecurity and biotechnology, as well as circumstances in which frontier AI systems could amplify risks such as disinformation. Given the significant capabilities of the AI models involved, there is potential for catastrophic harm, and therefore an urgent need to deepen our collective understanding of these risks.
The Declaration goes on to emphasise the need for international co-operation to address the risks presented by AI. At the international level, it encourages nations to implement pro-innovation and proportionate governance and regulation that maximises the benefits while taking account of the risks. Further international AI Safety Summits are to follow this initial one hosted by the UK, as part of the participating countries' continued co-operation on the topic.
The countries and regions that signed the Declaration are: Australia, Brazil, Canada, Chile, China, the EU, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Netherlands, Nigeria, Philippines, Rwanda, Saudi Arabia, Singapore, South Korea, Spain, Switzerland, Turkey, Ukraine, UAE, UK, USA.
Hosting this initiative was a notable achievement for the UK; participants included government officials from leading countries as well as individuals from AI business and research.
One sector with enormous potential for good, as well as significant risks of the kind the Declaration describes, is pharma and life sciences. The 2024 PING Conference is entitled "AI in Pharma - Threat or Opportunity?". We will be hearing from leading speakers on topics such as: