The United States has signed the “Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law,” sponsored by the Council of Europe and touted as the first legally binding treaty on AI safety.
The treaty was opened for signature Sept. 5 at a Council of Europe justice ministers meeting in Vilnius, Lithuania, and was “signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom as well as Israel, the United States of America and the European Union,” according to a COE announcement.
It was negotiated by the COE member states, the U.S. and 10 other nonmember states including Canada, Mexico, Japan, Australia and Israel.

The treaty must be submitted to the U.S. Senate for ratification. The Biden administration hasn’t announced timing for that step.
“The treaty provides a legal framework covering the entire lifecycle of AI systems. It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral,” the statement said.
“The treaty will enter into force on the first day of the month following the expiration of a period of three months after the date on which five signatories, including at least three Council of Europe member states, have ratified it. Countries from all over the world will be eligible to join it and commit to complying with its provisions,” the COE explained.
Council of Europe Secretary General Marija Pejčinović Burić said: “We must ensure that the rise of AI upholds our standards, rather than undermining them. The Framework Convention is designed to ensure just that. It is a strong and balanced text -- the result of the open and inclusive approach by which it was drafted and which ensured that it benefits from multiple and expert perspectives. The Framework Convention is an open treaty with a potentially global reach. I hope that these will be the first of many signatures and that they will be followed quickly by ratifications, so that the treaty can enter into force as soon as possible.”
Framework details
The framework convention says signatories are “Convinced of the need to establish, as a matter of priority, a globally applicable legal framework setting out common general principles and rules governing the activities within the lifecycle of artificial intelligence systems that effectively preserves shared values and harnesses the benefits of artificial intelligence for the promotion of these values in a manner conducive to responsible innovation.”
It says the treaty “is intended to address specific challenges which arise throughout the lifecycle of artificial intelligence systems and encourage the consideration of the wider risks and impacts related to these technologies including, but not limited to, human health and the environment, and socio-economic aspects, such as employment and labour.”
Its “general obligations” address human rights protections and “Integrity of democratic processes and respect for the rule of law.”
“Each Party shall adopt or maintain measures to ensure that the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights, as enshrined in applicable international law and in its domestic law,” it says.
And, “Each Party shall adopt or maintain measures that seek to ensure that artificial intelligence systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice.”
Further, “Each Party shall adopt or maintain measures that seek to protect its democratic processes in the context of activities within the lifecycle of artificial intelligence systems, including individuals’ fair access to and participation in public debate, as well as their ability to freely form opinions.”
Articles in the document address “human dignity and individual autonomy,” transparency, accountability, nondiscrimination, privacy, reliability, and “safe innovation.”
In a chapter on remedies, it says, “Each Party shall, to the extent remedies are required by its international obligations and consistent with its domestic legal system, adopt or maintain measures to ensure the availability of accessible and effective remedies for violations of human rights resulting from the activities within the lifecycle of artificial intelligence systems.”
That chapter says, “each Party shall adopt or maintain measures including:”
a) measures to ensure that relevant information regarding artificial intelligence systems which have the potential to significantly affect human rights and their relevant usage is documented, provided to bodies authorised to access that information and, where appropriate and applicable, made available or communicated to affected persons;
b) measures to ensure that the information referred to in subparagraph a is sufficient for the affected persons to contest the decision(s) made or substantially informed by the use of the system, and, where relevant and appropriate, the use of the system itself; and
c) an effective possibility for persons concerned to lodge a complaint to competent authorities.
On risk assessment and mitigation it says “Each Party shall, taking into account the principles set forth in Chapter III, adopt or maintain measures for the identification, assessment, prevention and mitigation of risks posed by artificial intelligence systems by considering actual and potential impacts to human rights, democracy and the rule of law.”
Such measures should:
a) take due account of the context and intended use of artificial intelligence systems, in particular as concerns risks to human rights, democracy, and the rule of law;
b) take due account of the severity and probability of potential impacts;
c) consider, where appropriate, the perspectives of relevant stakeholders, in particular persons whose rights may be impacted;
d) apply iteratively throughout the activities within the lifecycle of the artificial intelligence system;
e) include monitoring for risks and adverse impacts to human rights, democracy, and the rule of law;
f) include documentation of risks, actual and potential impacts, and the risk management approach; and
g) require, where appropriate, testing of artificial intelligence systems before making them available for first use and when they are significantly modified.
The COE also released an accompanying 33-page “explanatory report” that runs through elements of the document.
The Future of Privacy Forum in a June analysis said, “States Parties to the Framework Convention will have to adopt appropriate legislative and administrative measures which give effect to the provisions of this instrument in their domestic laws.”
FPF said, “In this way, the Framework Convention has the potential to affect ongoing national and regional efforts to design and adopt binding AI laws, and may be uniquely positioned to advance interoperability.”